http doesn't work with some websites
Started by AndreWalia, Jan 21 2015 12:21 AM
6 replies to this topic

#1
Posted 21 January 2015 - 12:21 AM
I have no websites blocked in my config file, but whenever I do
r = http.get("http://translate.google.com")
r always returns nil.
When I do
r = http.get("http://google.com")
r is not nil, and when I do
r = http.get("http://yahoo.com")
r is not nil either.
What is going on here?
#2
Posted 21 January 2015 - 03:37 PM
Most subdomains in Google are https:// domains. Try using that, and see if that fixes it.
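A quick way to test this suggestion from the Lua prompt is to try the same host over both schemes and see which one succeeds. This is just a sketch assuming the standard CC http API is enabled; note that some ComputerCraft versions do not support https at all, so a nil there may just mean the protocol is unsupported.

```lua
-- Try the same host over http and https; a nil result means the
-- request failed (blocked, rejected, or the scheme is unsupported).
local urls = {
  "http://translate.google.com",
  "https://translate.google.com",
}
for _, url in ipairs(urls) do
  local r = http.get(url)
  if r then
    print(url .. " -> ok")
    r.close()
  else
    print(url .. " -> nil")
  end
end
```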
#3
Posted 15 February 2015 - 03:31 AM
Sorry for the super late reply... I was in India.
I have tried everything from https:// to http:// to www. to http://www. to https://www., and r always returns nil...
#4
Posted 15 February 2015 - 04:51 AM
The translate subdomain on Google may disallow Java user agents in an effort to prevent automated tools from scraping the translation system.
#5
Posted 15 February 2015 - 07:05 AM
It also happens with computercraft.info
#6
Posted 15 February 2015 - 08:54 AM
What does the CC config look like? You may be allowing the sites incorrectly.
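For reference, the relevant entries in ComputerCraft.cfg in this era looked roughly like the fragment below. The exact key names vary between ComputerCraft versions, so treat this as a sketch of what to check rather than the exact file:

```
general {
    # Enable the "http" API on Computers
    B:enableAPI_http=true

    # A semicolon-limited list of wildcards for domains that can be
    # accessed through the "http" API. "*" allows everything.
    S:http_whitelist=*
}
```

If the whitelist only lists specific domains, a request to any other host will return nil with no error message, which looks exactly like the behaviour described above.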
#7
Posted 15 February 2015 - 11:50 PM
Some sites require you to change your user agent, I suppose because of what Lyqyd said. This includes computercraft.info.
While it's not really a bug, it would be nice if the default user agent were something other than Java to prevent issues like this.

local req = http.get("http://computercraft.info")
print(type(req)) --# nil
req = http.get("http://computercraft.info", { ["User-Agent"] = "something" })
print(type(req)) --# Works! (table)
req.close()