Robots in the forum?

DeletedUser231

Guest
Total: 24 (members: 6, guests: 12, robots: 6)

6 robots in the forum.

Might be missing something here but what exactly is meant by this?
 

DeletedUser

Guest
It means:
The NSA, GCHQ and friends are watching what we do ;) and of course search engines are looking too.
 

DeletedUser

Guest
f.e. search engines??????

What is that please?

It is a robot that looks through the internet and writes what it finds into a "telephone book" called Google. Google lists everything that is on the internet and searches it with an automatic searching robot.
 

DeletedUser31

Guest
I think she meant the f.e. part.
I assume f.e. stands for "for example."
But it's just a wild guess.

On the other hand, Denali explained what a search engine is :) They scan and index the content on pages to make it searchable. If they didn't, you wouldn't be able to use Google Search to find this site. If you typed "Elvenar" into Google Search it wouldn't give you any results.

Edit:
Not just Google Search but many others. But I think that's the most popular one, so I used it in my example.
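A toy sketch of what "scan and index" means, in Python (the pages and their text are made up for illustration; a real search engine's index is far more elaborate):

```python
# Minimal inverted index: map each word to the set of pages containing it.
# This is, at heart, what makes crawled pages searchable.
pages = {
    "page1": "Elvenar is a city building game",
    "page2": "Search engines crawl and index pages",
}

index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(word):
    """Return the set of pages containing the word."""
    return index.get(word.lower(), set())

# search("Elvenar") -> {"page1"}
```

Typing "Elvenar" into a search box then becomes a simple lookup in that index, rather than a scan of the whole web.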
 

DeletedUser231

Guest

Thank you NOSHI

+1 for your help and explanation.
 

DeletedUser231

Guest
So another question...

I hear about a thing called a BOT...Is that a like a robot?
 

DeletedUser58

Guest
A bot's more like a script or an external application/code which runs on a page and does something it's been instructed to do. There are search bots, and there are other bots. A lot of games have bots which are used for cheating (they allow you to intercept and modify the game's data) as they can let you add resources for yourself or access some data you shouldn't be able to access, or they can be automatic clickers which know on which position to "click" so they make your gameplay easier. Do note these aren't allowed by the rules and Inno has ways of tracking such activities (so stay away from them if you want to keep playing).
 

DeletedUser231

Guest

So if these illegal bots are bad, why are so many sites on the web giving them away for free?

I have seen them all over the place. I have always been afraid of getting a virus or having my system compromised. But I guess some players like to cheat and be top dog in a world without spending real money. I have seen so many cheaters, multi-accounters, etc. There is never a way of catching the smart ones, only the stupid ones... :p Same as in real life.
 

DeletedUser58

Guest
Some are actually useful and not necessarily bad (if they're used for something productive on your computer versus illegal use in a game). But I think they're mostly popular because people like winning easily nowadays; it's more appealing to have a bot do the work so you don't have to put in the effort. :p
 

DeletedUser231

Guest
YES... I ran into it in TRIBAL WARS... bots doing the farming for players. Got many players banned.
 

DeletedUser

Guest
I had it with Runescape; in my 'clan' we had one. That was about 4 1/2 years ago. Good times, good times.
 

DeletedUser

Guest
Googlebot
Googlebot is Google's web crawling bot (sometimes also called a "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or "crawl") billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Googlebot's crawl process begins with a list of webpage URLs, generated from previous crawl processes and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links (SRC and HREF) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
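The link-detection step described above (finding SRC and HREF attributes on each page) can be sketched in a few lines of Python; this is a simplified illustration, not Googlebot's actual code:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href and src attribute values from a page, the way the
    crawl-process description says Googlebot discovers new URLs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def discover_links(html):
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

page = '<a href="/about">About</a> <img src="/logo.png">'
# discover_links(page) -> ['/about', '/logo.png']
```

A crawler then adds each discovered URL to its list of pages to fetch, which is how following links from page to page snowballs into an index of the web.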

For webmasters: Googlebot and your site
How Googlebot accesses your site
For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to network delays, it's possible that the rate will appear to be slightly higher over short periods. In general, Googlebot should download only one copy of each page at a time. If you see that Googlebot is downloading a page multiple times, it's probably because the crawler was stopped and restarted.

Googlebot was designed to be distributed on several machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites they're indexing in the network. Therefore, your logs may show visits from several machines at google.com, all with the user-agent Googlebot. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server's bandwidth. Request a change in the crawl rate.

Blocking Googlebot from content on your site
It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links to reflect changes in your server, Googlebot will try to download an incorrect link from your site.

If you want to prevent Googlebot from crawling content on your site, you have a number of options, including using robots.txt to block access to files and directories on your server.

Once you've created your robots.txt file, there may be a small delay before Googlebot discovers your changes. If Googlebot is still crawling content you've blocked in robots.txt, check that the robots.txt is in the correct location. It must be in the top directory of the server (e.g., www.myhost.com/robots.txt); placing the file in a subdirectory won't have any effect.

If you just want to prevent the "file not found" error messages in your web server log, you can create an empty file named robots.txt. If you want to prevent Googlebot from following any links on a page of your site, you can use the nofollow meta tag. To prevent Googlebot from following an individual link, add the rel="nofollow" attribute to the link itself.
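Python's standard library can parse robots.txt rules the same way a well-behaved crawler does, which is a handy way to check what a file you've written will actually block. The rules below are made up for illustration:

```python
import urllib.robotparser

# A sample robots.txt blocking Googlebot from a /private/ directory
# (hypothetical rules, for illustration only).
rules = """
User-agent: Googlebot
Disallow: /private/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

parser.can_fetch("Googlebot", "https://www.myhost.com/private/page.html")  # False
parser.can_fetch("Googlebot", "https://www.myhost.com/public/page.html")   # True
```

This mirrors the Test robots.txt tool mentioned below: it shows how a crawler honoring the Robots Exclusion Protocol will interpret your file before you deploy it.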

Here are some additional tips:

  • Test that your robots.txt is working as expected. The Test robots.txt tool on the Blocked URLs page (under Health) lets you see exactly how Googlebot will interpret the contents of your robots.txt file. The Google user-agent is (appropriately enough) Googlebot.
  • The Fetch as Google tool in Search Console helps you understand exactly how your site appears to Googlebot. This can be very useful when troubleshooting problems with your site's content or discoverability in search results.

Making sure your site is crawlable
Googlebot discovers sites by following links from page to page. The Crawl errors page in Search Console lists any problems Googlebot found when crawling your site. We recommend reviewing these crawl errors regularly to identify any problems with your site.

If you're running an AJAX application with content that you'd like to appear in search results, we recommend reviewing our proposal on making AJAX-based content crawlable and indexable.

If your robots.txt file is working as expected, but your site isn't getting traffic, here are some possible reasons why your content is not performing well in search.

Problems with spammers and other user-agents
The IP addresses used by Googlebot change from time to time. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot). You can verify that a bot accessing your server really is Googlebot by using a reverse DNS lookup.
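The verification described above is forward-confirmed reverse DNS: look up the hostname for the visiting IP, check it belongs to a Google crawler domain, then resolve that hostname back and confirm it matches the original IP. A Python sketch (the domain suffixes match what Google documents for Googlebot, but treat the details as an illustration):

```python
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(hostname):
    """Check a reverse-DNS hostname against Google's crawler domains."""
    return hostname.endswith(GOOGLE_SUFFIXES)

def verify_googlebot(ip):
    """Forward-confirmed reverse DNS: reverse-resolve the IP, check the
    domain, then forward-resolve the hostname back to the same IP."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]   # reverse lookup
    except socket.herror:
        return False
    if not hostname_is_google(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward lookup
    except socket.gaierror:
        return False

# hostname_is_google("crawl-66-249-66-1.googlebot.com") -> True
# hostname_is_google("fake-googlebot.example.com")      -> False
```

The forward-confirmation step matters: anyone can set a reverse-DNS record claiming to be googlebot.com, but only Google can make that hostname resolve back to their IP.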

Googlebot and all respectable search engine bots will respect the directives in robots.txt, but some nogoodniks and spammers do not. Report spam to Google.

Google has several other user-agents, including Feedfetcher (user-agent Feedfetcher-Google). Since Feedfetcher requests come from explicit action by human users who have added the feeds to their Google home page or to Google Reader, and not from automated crawlers, Feedfetcher does not follow robots.txt guidelines. You can prevent Feedfetcher from crawling your site by configuring your server to serve a 404, 410, or other error status message to user-agent Feedfetcher-Google. More information about Feedfetcher.
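One way to act on the advice above (serve an error status to the Feedfetcher-Google user-agent) is a small check wherever your server decides how to respond; a sketch, with the helper name made up for illustration:

```python
def status_for(user_agent):
    """Return the HTTP status to serve: 404 for Feedfetcher-Google
    (per the blocking advice above), 200 for everything else."""
    if user_agent and user_agent.startswith("Feedfetcher-Google"):
        return 404
    return 200

# status_for("Feedfetcher-Google")  -> 404
# status_for("Mozilla/5.0")         -> 200
```

In practice you would wire this check into your web server's configuration or request handler; the user-agent string arrives in the User-Agent request header.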
 