PBN build

Discussion in 'Case Studies and Journals' started by mstchr, Dec 27, 2016.


  1. mstchr

    mstchr Active Member

    Joined:
    Dec 10, 2015
    Messages:
    54
    Likes Received:
    69
    This will be my first real earnest PBN. I've built out supporting sites for many different money sites over the years, but I've never gotten around to building a PBN just for the sake of having one.

    Things I've been reading that make sense to me:

    Make the sites look good
    Invest in good content
    Build in small chunks, finishing sites progressively, not building lots of sites at once​

    I intend to use a healthy mix of WordPress and HTML templates with PHP includes, maybe some concrete5 or Joomla or whatnot. The faster the better, because I don't like fucking around all day in the backend of shitty CMSs like WordPress.

    I would very much like to host my sites with Amazon/Digital Ocean/Linode/etc, we'll see how that goes since I suck at servers and programming in general.

    This PBN will be thematic, mostly centered around my current location in the world. I intend to use this PBN to rank money sites that I will sell or rent to businesses. I've really come to like this form of revenue generation. I also have a long ball to play with this network based on a big event a few years down the road.

    More to come soon. I'm looking forward to learning from anyone here who knows their PBNs.

    Cheers and season's greetings!
     
    cardine and Anaconda like this.
  2. cardine

    cardine Administrator Staff Member

    Joined:
    Dec 9, 2015
    Messages:
    1,058
    Likes Received:
    1,022
    Looking forward to following your progress on this! We had an old thread where we talked a little bit about setting up PBNs - although I'm sure none of that info will come as any surprise to you.

    I think the most important parts are doing everything possible to minimize footprints, and centering the network very specifically on one niche - which it sounds like you're already doing.

    The only other thing I see people struggle with very often is that they get way too greedy with their PBNs. They should look like real sites, they should link out to other people just as much as to you, and often they shouldn't even link out to anyone. People hate doing work on a PBN that isn't immediately impacting their money sites, and that's where shortcuts creep in that make a site look way spammier. I've even seen great results from PBNs that were created specifically for one or two money sites and had the quality of money sites themselves, and were then powered by a spammier tier of (still powerful) PBNs.

    But in general I don't think PBNs are incredibly complicated - just be diligent with putting in the work and making sure none of the sites are neglected or abused and the results will be very powerful.
     
  3. mstchr

    mstchr Active Member

    Joined:
    Dec 10, 2015
    Messages:
    54
    Likes Received:
    69
    Nice, yeah I'll read through that thread again though...I just read the thread here where the guy was stubborn about finding his own domains, actually learned a couple things there thanks to @Golan.

    Niche-specific because that brings strength to the links?

    Nice, yeah the tiered PBN thing sounds like a powerful thing if set up right. I'll keep that in mind, although I think this will be a slow and steady process. I'm not worried about the links, I'm very conservative with all SEO work and it generally pays off well...I'm the guy who tells web2.0 setup service providers to not add my links when they make the properties, just send me the credentials and I'll place the links later :p

    Good advice, thanks.

    Question: I'm going to be reg'ing at least a handful of fresh domains for this network, so the sites won't come with any juice the way expired domains do. I haven't done a lot of linkbuilding lately...if I want to get the link juice flowing to these properties for later use in ranking a couple money sites, what's the stock in trade these days? I intend to set up social accounts and IFTTT. Beyond that, what are some recommended link types to power PBN sites?
     
    Golan likes this.
  4. terrycody

    terrycody Member

    Joined:
    Apr 20, 2016
    Messages:
    25
    Likes Received:
    4
    PBN link building comes down to being very versatile - you can do pretty much whatever you want, just mix your PBN links in with more general links.

    With expired domains, PBN sites often already have enough link juice waiting to flow into your pages. Building social signals, as you said, is indeed a very good way to make a PBN look natural, and I've read cases where it clearly helped rankings (though some people claim it's useless).

    Some patterns you might be interested in:

    For example: PBN A, with 20 posts, each post with multiple social signals, and each post linking to one of your money site pages. To be cautious, link only ONCE to your money site per post - it looks more natural that way. You can also publish some posts with no links at all, to make the site look even more human.

    You can send another tier of links to PBN A - e.g. GSA links or even cheaper PBN links - though that's the traditional way; or you can just stop building links to your PBN. It's up to you.

    A safer way to do PBN links these days is to get a guest post on another site and build PBN links to that page, though that sounds a bit immoral.
     
  5. mstchr

    mstchr Active Member

    Joined:
    Dec 10, 2015
    Messages:
    54
    Likes Received:
    69
    So much spam!

    I had no idea just how many expired domains have been spammed these days, it's out of control. Now I understand why services exist to find good ones, it's an excruciating process.

    Can anyone vouch for hammerhead domains' topical lists?
     
  6. terrycody

    terrycody Member

    Joined:
    Apr 20, 2016
    Messages:
    25
    Likes Received:
    4

    There are AT LEAST thousands of people and bots crawling the web 24/7 around the world. I happen to know a guru - I'd say maybe no one is smarter than him in this domain-scraping niche, no kidding. Even he struggles to find domains for his customers. It's really, really hard; even if you're very clever, it still takes you a lot of time!

    In other words, to some extent it's not worth scraping yourself if you spend too much time and still find none, or only a very few clean ones, while losing years of time.

    As for hammerhead's topical lists, I would say: NONE. If there are any, they're only brainstormed by a true GURU once in a while.
     
    mstchr and Golan like this.
  7. cardine

    cardine Administrator Staff Member

    Joined:
    Dec 9, 2015
    Messages:
    1,058
    Likes Received:
    1,022
    Yes - from all the evidence I've seen, I strongly believe Google cares about the topicality and relevance of the page/site a link is on almost as much as the actual anchor text. I certainly know from a technology standpoint they can quickly and easily figure out the "keywords" of a page or article - and considering how much anchor text matters, you'd be losing out if you didn't put the same amount of effort into making the entire site follow the same keyword themes.

    If you aren't looking to do churn and burn I think it's far better to err on the side of being too conservative versus being too greedy. Ideally you want even your Tier 2 links to have some "humanness" to them.
     
    mstchr likes this.
  8. Golan

    Golan Established Member

    Joined:
    Dec 10, 2015
    Messages:
    101
    Likes Received:
    84
    I almost entirely stopped scraping the net, only researching the drop lists now and trying to catch good ones.
     
    mstchr likes this.
  9. Bender Bending Rodríguez

    Bender Bending Rodríguez Senior Member

    Joined:
    Mar 10, 2016
    Messages:
    507
    Likes Received:
    338
    This. Google probably mapped the whole Internet into topics using LSA years ago.
     
    mstchr likes this.
  10. n00b

    n00b Member

    Joined:
    May 24, 2016
    Messages:
    91
    Likes Received:
    10
    Good luck with your journey, brother! =) Let's kick off 2017 and earn and rank! =))
     
    mstchr likes this.
  11. mstchr

    mstchr Active Member

    Joined:
    Dec 10, 2015
    Messages:
    54
    Likes Received:
    69
    Scraping for the programmatically challenged
    or

    How I successfully scraped the web for a handful of decent expired domains

    I'm building a niche-specific network. The vast majority of links I build are extremely niche specific. People will disagree and I even disagree with myself on this. Whatever.


    So I'm after niche domains. Now I'm not a coder so programmers, please hold your noses. I'm a hack. I get by with whatever tools I know. Warning: dirty computer use ahead

    Step 1: Find directories
    In my case the niche is a location, which makes directories easy to find. So I grab a handful of directories.
    Let's say my niche is Michigan, or even something to do with Michigan. I google "Michigan business directory" and find this site. I want to scrape the domains from this directory.

    Step 2: Scrape categories

    I look at the page and find this:

    <div class="panel-heading">
    <h6 class="panel-title">
    <a data-toggle="collapse" data-parent="#accordionC" href="#collapseC10">
    Animals & Pet</a></h6> </div>
    <div id="collapseC10" class="panel-collapse collapse ">
    <div class="panel-body">
    <ul>
    lots of links to subcategories in <li> tags
    </ul>
    </div></div>
    <div class="panel-heading">
    <h6 class="panel-title">
    <a data-toggle="collapse" data-parent="#accordionC" href="#collapseC126">
    Apparel</a></h6> </div>
    <div id="collapseC126" class="panel-collapse collapse ">
    <div class="panel-body">
    <ul>
    lots of links to subcategories in <li> tags
    </ul>
    </div></div>
    <div class="panel-heading">
    <h6 class="panel-title">
    blah blah blah

    I need those subcategory links. Scrapebox custom data grabber: I'm going after the panel-body divs. Why? Because I suck at scraping and I know that whatever time I spend analyzing the footprint trying to figure out how to tell scrapebox exactly what I want will be wasted, because I have another way of getting the links I need.

    [​IMG]

    Blahblahblah, set up the grabber. The syntax is before_after=somebeforeshit|someaftershit, in this case before_after=<div class="panel-body">|</div>. This needs to be just right: not specific enough and you'll grab a bunch of shit you don't want, too specific and you'll miss stuff.

    [​IMG]

    Let's see how it goes, and I got 27 results, and there are 27 categories on the page so that's swell. Except since I'm a shitty scraper all I have is 27 ugly unordered lists with a bunch of relative links and empty space and shit. Not very useful. So I open it in a text editor and find/replace to create absolute urls that I can scrape. find href=' and replace with href='http://www.michiganbusiness.us/
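    For anyone without Scrapebox, the before_after grab plus the find/replace absolutizing step above can be sketched in a few lines of stdlib Python. The sample HTML and base URL here are illustrative stand-ins, not the directory's real markup:

```python
import re
from urllib.parse import urljoin

BASE = "http://www.michiganbusiness.us/"  # base used for the find/replace step


def grab_between(html, before, after):
    """Mimic the before_after grabber: return every chunk of text
    found between the 'before' marker and the next 'after' marker."""
    pattern = re.escape(before) + r"(.*?)" + re.escape(after)
    return re.findall(pattern, html, flags=re.DOTALL)


def absolutize_links(chunk, base=BASE):
    """Pull href values out of a grabbed chunk and resolve the
    relative ones against the directory's base URL."""
    hrefs = re.findall(r"href=['\"]([^'\"]+)['\"]", chunk)
    return [urljoin(base, h) for h in hrefs]


# Toy stand-in for one category accordion on the page:
html = """<div class="panel-heading"><h6 class="panel-title">
<a href="#collapseC10">Animals & Pet</a></h6></div>
<div class="panel-body"><ul>
<li><a href='dogs.html'>Dogs</a></li>
<li><a href='cats.html'>Cats</a></li>
</ul></div>"""

for chunk in grab_between(html, '<div class="panel-body">', '</div>'):
    print(absolutize_links(chunk))
```

    Same idea as the Scrapebox workflow: one pass grabs the ugly list chunks, a second pass turns the relative hrefs into scrapeable absolute URLs.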

    [​IMG]

    Extract the links here and it's time to get the page urls where our potential domains are.

    [​IMG]

    I go to one of these pages, how about this one and do the same thing, check the source code and what do I find but the same footprint as on the homepage. This is super sweet, this doesn't usually happen. I test it on my sample page to make sure. I'm expecting to get nine, maybe ten <ul> chunks back from scrapebox so I load my sample url in and run the grabber. I get ten back. If you look at the page there are nine categories and one below labeled 'online' which I didn't bother to check for a different footprint but it looks like it uses the same thing.

    OK I load up my list of 930something urls that I grabbed and run the scraper. For this I just ran it for 30 seconds, enough to get some results. Then I clean up that data and have a bunch of page URLs in the directory which potentially contain domain names. Now it's time to get the last footprint which in this case is completely fucked up. WTF kind of markup is this
    [​IMG]
    I'm going for the onMouseover bit to grab the domain, then I'll clean it up. On this last step, when you're grabbing the domains themselves in particular, it's important to check a few pages and make sure the footprint is the same. Some directories switch it up and some are just coded like absolute dogshit, kinda like how I scrape for domains. So I figure out my footprint and feed it to scrapebox. before_after=onMouseover="window.status='|return true" That worked. I get something like this: http://www.example.com';. Good enough for me.
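    That onMouseover grab boils down to one regex plus stripping the trailing '; junk the raw grab leaves behind. A sketch (the sample anchor tag is made up to match the footprint described above):

```python
import re


def extract_status_urls(page_html):
    """Grab whatever sits between window.status=' and return true",
    then strip the trailing "';" residue from each match."""
    raw = re.findall(r"onMouseover=\"window\.status='(.*?)return true\"",
                     page_html, flags=re.DOTALL)
    return [chunk.rstrip("'; ") for chunk in raw]


# Hypothetical listing markup using the same footprint:
sample = ('<a href="#" onMouseover="window.status='
          "'http://www.example.com';return true\">Example Biz</a>")
print(extract_status_urls(sample))
```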

    When I get the domains back from this scrape I look to see if there are any irregularities and clean it up depending on what I see. In this case there are some lines with two domains listed, some with a directory or page at the end of the domain, etc. If it's substantial then I'll take a few minutes cleaning it up by trimming to root in scrapebox, find/replace, whatever.
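    The cleanup described above (grab residue, trailing paths, trim to root) is mostly mechanical. A rough stdlib-only equivalent of the trim-to-root pass, with my own hypothetical sample lines; lines carrying two domains would still need splitting first:

```python
from urllib.parse import urlparse


def trim_to_root(line):
    """Reduce one scraped line to a bare root domain, roughly what
    'trim to root' in Scrapebox does."""
    url = line.strip().rstrip("';")        # drop grab residue like "';"
    if "://" not in url:
        url = "http://" + url              # urlparse needs a scheme
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host


lines = [
    "http://www.example.com';",
    "example.org/some/page.html",
    "WWW.Example.NET",
]
print(sorted({trim_to_root(l) for l in lines}))
```

    Running the whole scrape through a set comprehension like that also dedupes the list for free before the availability check.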

    Part 3 - Checking
    Namebright for availability, then ahrefs/majestic for metrics.

    And that's it. Keeping in mind that I scraped probably 2% of what I could have from the site, this is what floated to the top:

    [​IMG]

    Nothing to write home about but that DR 38 with 21 referring IPs on 100 links has zero spam, a squeaky clean link profile and one nice authority link. If I was in this niche I would buy it. The way I see it, a clean niche-relevant DR 35-38 is just as good as some random off-topic 40-42. Just my opinion.

    The best I've found from scraping directories in my niche is a DR 43 with 200ish backlinks, a legit .edu link, zero linkbuilding, squeaky clean profile, reg'd for $8. Noice.

    Are these the best domains for PBNs? No. Of course not. The best ones are in the drop lists and with brokers. But there's definitely some things to be found here and there.

    A couple notes:

    Something I do often although not shown here is use spreadsheets to build lists for scraping. I'll build long lists of urls by pulling cells down and using CONCATENATE. For example if a directory's url structure is example.com/location/industry/p2.html then I'll build the url cell by cell in the top row, pull all the cells down making sure that p2 makes p3/p4/p5/etc., then =concatenate(a1,b1,c1,d1,etc.) to build the urls. Once that's done a simple find/replace will get you from 20 pages of example.com/location/industry/pagenumber.html to example.com/anotherlocation/anotherindustry/pagenumber.html very quickly. This is great for paginated directories.
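    The CONCATENATE trick is really just a cross-product of URL segments. If you ever move off spreadsheets, the same list falls out of a few lines of Python; the domain and path segments here are made up for illustration:

```python
from itertools import product

# Hypothetical directory structure: example.com/location/industry/pN.html
locations = ["detroit", "lansing"]
industries = ["plumbing", "roofing"]
pages = range(2, 6)                     # p2 through p5

# Cross-product of every location, industry, and page number:
urls = [
    f"http://example.com/{loc}/{ind}/p{num}.html"
    for loc, ind, num in product(locations, industries, pages)
]
print(len(urls))   # 2 locations x 2 industries x 4 pages = 16
print(urls[0])
```

    Swapping in a new location or industry list regenerates every paginated URL at once, which is the same win the find/replace on the spreadsheet gives you.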

    This process looks long but it's not. Once you get good with footprints you can set up the grabbers quickly enough. What I've been doing lately is writing out footprints for an entire site, starting from the top and going down to whatever level contains the domains, testing them to make sure they work, and then running a few instances of scrapebox overnight to scrape multiple sites at once. If I wasn't writing this post I could have scraped this entire Michigan directory in under 30 minutes, 99% of which is letting the scraper run.

    If you're new to scraping keep in mind that big sites with revenues and budgets can detect scrapers so be careful with your proxies/connections/etc. I'm currently very very pissed off that the best directory in my niche has cockblocked every proxy I've thrown at it after about 2 minutes of scraping, even with only one connection. Only thing I can think to do is add big delays between requests but it will take three weeks to scrape the directory if I do that. fuckers.
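    On the delay idea: the only real knobs are a base delay plus random jitter, so the gaps between requests don't form an obvious fixed-interval pattern. A minimal sketch; the fetch loop is untested against any real directory, and the User-Agent and delay values are guesses, not known-safe thresholds:

```python
import random
import time
from urllib.request import Request, urlopen


def next_delay(base=5.0, jitter=3.0):
    """Seconds to wait before the next request: base plus up to
    'jitter' extra seconds chosen at random."""
    return base + random.uniform(0, jitter)


def polite_fetch(urls, base=5.0, jitter=3.0):
    """Yield (url, body) one request at a time, sleeping a jittered
    delay between requests and sending a browser-ish User-Agent."""
    for url in urls:
        req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urlopen(req, timeout=15) as resp:
            yield url, resp.read()
        time.sleep(next_delay(base, jitter))


# Deterministic check of the delay logic alone (no network):
print(next_delay(base=2.0, jitter=0.0))
```

    It won't beat real anti-bot systems on its own, but combined with rotating proxies it at least avoids the metronome-like request timing that gets flagged fastest.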

    Anyway I hope this helps someone out there, I love scraping and if I ever do learn to code it will be to scrape stuff.

    Edit: PS if any of you smart people have tips for thwarting anti-scraping stuff (or feel like running your CPUs for a scrape :p) please do tell.
     
    Last edited: Apr 9, 2017
    cardine and Golan like this.
  12. Golan

    Golan Established Member

    Joined:
    Dec 10, 2015
    Messages:
    101
    Likes Received:
    84
    Thanks for the tool, looks nice.