Knowing and understanding our traffic is paramount to our team, and to our reputation!
We are completely confident in our traffic quality, and we are equally transparent with our advertisers whenever they have questions or concerns.
In fact, our long-standing reputation in the industry is built on our traffic quality. Gregg McNair has often preached this from the domain pulpit at domain conferences, but we have also been able to prove it in action for more than 10 years by maintaining rank 1 Google ratings. Our long-term relationships with advertisers are also a fruit of our long-standing credibility in the domain field.
Our traffic requirements are simple: we accept only domain name type-in traffic.
How do we ensure this?
Our first (and best) line of defense against ‘bad traffic’ is our client list. For 10 years we have worked exclusively with large, well-respected clients, and by invitation only. These are private domain name holders who are well known and respected in the industry, and we know both them and their domain names well.
Each domain on our API is individually assessed and monitored, and a decision is then made to allow or deny its traffic on our bidding platform. This assessment is done by researching the history of the domain (we maintain a large database with 10 years of history on tens of millions of domains), comparing it to similar domains, and researching the individual IPs and traffic sources.
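For illustration only, the allow/deny decision can be pictured as a simple check against a domain’s recorded history. The sketch below is hypothetical: the DomainRecord fields and thresholds are invented for the example and are not our actual criteria.

```python
from dataclasses import dataclass

@dataclass
class DomainRecord:
    # Hypothetical fields; the real database tracks many more signals
    # across 10 years of history on tens of millions of domains.
    name: str
    years_of_history: int
    human_visit_ratio: float   # share of visits judged to be human
    flagged_sources: int       # questionable traffic sources observed

def allow_on_platform(record: DomainRecord) -> bool:
    """Illustrative allow/deny decision for a domain on the bidding platform."""
    if record.flagged_sources > 0:
        return False
    if record.human_visit_ratio < 0.95:    # example threshold only
        return False
    return record.years_of_history >= 1

print(allow_on_platform(DomainRecord("example.com", 8, 0.99, 0)))  # True
```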
As you may know, even an honest domain name owner cannot control all of the traffic that comes to their domain. Many domain names have old links that bring traffic that is not from actual humans: for example, a request for an image, or a monitoring program configured to check the uptime of the domain. This is where our proprietary technology steps in. We provide a three-level (selectable) filtering mechanism.
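As a rough sketch of the idea (not our actual implementation; the level names and request fields are hypothetical), the selectable filtering can be pictured like this:

```python
from enum import Enum

class FilterLevel(Enum):
    # Hypothetical level names; the real levels are internal to our platform.
    RAW = 1       # pass every request through unchanged
    BASIC = 2     # drop obvious non-human requests (image fetches, uptime monitors)
    STRICT = 3    # additionally screen IPs and user agents, as described below

def filter_requests(requests, level):
    """Yield only the requests that survive the selected filtering level."""
    for req in requests:
        if level is FilterLevel.RAW:
            yield req            # advertiser wants everything, unfiltered
            continue
        if req.get("is_image_request") or req.get("is_uptime_monitor"):
            continue             # old links fetching assets, monitoring probes
        if level is FilterLevel.STRICT and (req.get("flagged_ip") or req.get("bot_user_agent")):
            continue             # screened out by the IP / user-agent checks
        yield req
```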
Some of our advertisers want to see all of the traffic and do the filtering themselves. In this case, we simply send them every request. Our highest-level filtering analyzes each IP address against databases that specialize in identifying hidden proxies and other questionable IPs, and filters those out.
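The IP screening step can be sketched roughly as a lookup against flagged networks. The networks below are IETF documentation ranges used purely as placeholders; in practice the lookup goes against specialized third-party databases of proxies and questionable IPs.

```python
import ipaddress

# Placeholder networks (documentation ranges); the real lookup uses
# specialized databases of hidden proxies and other questionable IPs.
FLAGGED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_questionable_ip(ip_str: str) -> bool:
    """Return True if the IP falls inside any flagged network."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in FLAGGED_NETWORKS)

print(is_questionable_ip("203.0.113.45"))  # True  -> filtered out
print(is_questionable_ip("8.8.8.8"))       # False -> passed through
```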
We also analyze the ‘User-Agent’ field of the HTTP header and cross-reference it against multiple databases covering more than 1.2 million user agents known to belong to ‘bots’ or other non-human sources. Repeated visits can also be filtered out upon request.
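A simplified sketch of that cross-reference (the pattern list here is a tiny invented subset, not the real databases), together with the optional repeat-visit filter:

```python
import re

# A tiny invented subset; the real cross-reference spans databases of more
# than 1.2 million user agents known to be non-human.
BOT_UA_PATTERNS = [
    re.compile(r"bot|crawler|spider", re.IGNORECASE),
    re.compile(r"uptime|monitor", re.IGNORECASE),
]

def is_non_human_user_agent(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known non-human pattern."""
    return any(p.search(user_agent) for p in BOT_UA_PATTERNS)

def drop_repeat_visits(requests):
    """Optional, per-advertiser filter: keep only the first visit per IP."""
    seen = set()
    for req in requests:
        if req["ip"] in seen:
            continue
        seen.add(req["ip"])
        yield req
```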
Adult domain names are identified in our system using Google results and a proprietary in-house method, and our advertisers can choose to opt in to or out of this traffic.
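The opt-in/opt-out choice itself is straightforward to picture. The keyword screen below is purely illustrative; the actual classification combines Google results with the proprietary method described above.

```python
# Purely illustrative keyword screen; the real classification combines
# Google results with a proprietary in-house method.
ADULT_KEYWORDS = {"adult", "xxx"}   # hypothetical, minimal example

def looks_adult(domain: str) -> bool:
    """Flag a domain name containing an adult keyword."""
    name = domain.lower()
    return any(kw in name for kw in ADULT_KEYWORDS)

def apply_adult_preference(requests, allow_adult: bool):
    """Honor the advertiser's opt-in / opt-out choice for adult traffic."""
    for req in requests:
        if not allow_adult and looks_adult(req["domain"]):
            continue
        yield req
```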
In summary, we start with honest, proven traffic and filter it further to ensure that the visit you are buying is a person sitting at a computer or mobile device. Sounds simple? It is!