Hi everyone,
I am trying to use curl to download Yahoo Finance historical data, in
order to see how I can include it in a quick script I am writing.
The URL I want to download is this:
http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv
It downloads the closing price of ticker symbol JNJ on a weekly basis
from September 2010 to September 2011.
The URL is not the problem; it is curl itself that I can’t get the hang
of. I want to be able to type a “curl” command on the command line and
have the file downloaded into the directory I am currently in, with a
filename that I specify.
Any help here? This should be pretty simple for someone who understands
curl.
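From what I can make of the man page, I am guessing it is something along
these lines, but I am not sure whether the options or the quoting are right:

# my untested guess: -o is supposed to write the download to the named
# file in the current directory (the filename here is just an example)
curl -o jnj_weekly.csv "http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv"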
--
G.O.
Box #1: 13.1 | KDE 4.12 | AMD Phenom IIX4 | 64 | 16GB
Box #2: 13.1 | KDE 4.12 | AMD Athlon X3 | 64 | 4GB
Laptop: 13.1 | KDE 4.12 | Core i7-2620M | 64 | 8GB
On 09/16/2014 01:36 PM, gogalthorp wrote:
>
> Did you try man curl??
>
>
Yes, but in reading it, I feel like I would have to have a master’s
degree in internet protocols and all the various things with html, URLs,
etc. in order to understand from the man page how to get what I want.
I am not completely committed to using the “curl” command. If there is a
better command to use on the command line than curl, I would rather use
that.
--
G.O.
Box #1: 13.1 | KDE 4.12 | AMD Phenom IIX4 | 64 | 16GB
Box #2: 13.1 | KDE 4.12 | AMD Athlon X3 | 64 | 4GB
Laptop: 13.1 | KDE 4.12 | Core i7-2620M | 64 | 8GB
On 09/16/2014 02:05 PM, golson765 wrote:
> On 09/16/2014 01:36 PM, gogalthorp wrote:
>>
>> Did you try man curl??
>>
>>
>
> Yes, but in reading it, I feel like I would have to have a master’s
> degree in internet protocols and all the various things with html, URLs,
> etc. in order to understand from the man page how to get what I want.
>
> I am not completely committed to using the “curl” command. If there is a
> better command to use on the command line than curl, I would rather use
> that.
>
I tried using “wget” like this:
> wget
http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv
[1] 23146
[2] 23147
[3] 23148
[4] 23149
[5] 23150
[6] 23151
[7] 23152
[8] 23153
[2] Done a=08
[3] Done b=10
[4] Done c=2010
[5] Done d=08
[6] Done e=10
[7]- Done f=2011
[8]+ Done g=w
george@tribetreklap:~/Documents/finance/Research/CarrRelStr/Sectors/testtables>
--2014-09-16 15:04:33-- http://real-chart.finance.yahoo.com/table.csv?s=JNJ
Resolving real-chart.finance.yahoo.com (real-chart.finance.yahoo.com)...
206.190.36.54
Connecting to real-chart.finance.yahoo.com
(real-chart.finance.yahoo.com)|206.190.36.54|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/csv]
Saving to: ‘table.csv?s=JNJ’
[ <=> ] 551,250  1.35MB/s  in 0.4s
2014-09-16 15:04:34 (1.35 MB/s) - ‘table.csv?s=JNJ’ saved [551250]
^C
[1]+ Done wget
http://real-chart.finance.yahoo.com/table.csv?s=JNJ
george@tribetreklap:~/Documents/finance/Research/CarrRelStr/Sectors/testtables>
ls
table.csv?s=JNJ
This downloads a file, but it is not the same file I get if I paste the
URL in question into my browser’s address bar. The wget download gives me
the entire daily history back to 1970 for the ticker JNJ, while pasting
the exact same URL into my browser gives me the weekly data for September
2010 through September 2011.
So I am at a loss as to what to do. I need the weekly historical data
for a specified time period, and I need to be able to do it in a shell.
Any help would be very much appreciated. It doesn’t matter to me if I
use curl, wget, or any other command, as long as I can do it in a shell
and retrieve the correct information.
--
G.O.
Box #1: 13.1 | KDE 4.12 | AMD Phenom IIX4 | 64 | 16GB
Box #2: 13.1 | KDE 4.12 | AMD Athlon X3 | 64 | 4GB
Laptop: 13.1 | KDE 4.12 | Core i7-2620M | 64 | 8GB
On Tue 16 Sep 2014 08:15:29 PM CDT, golson765 wrote:
> I tried using “wget” like this:
>
> wget
> http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv
>
> This downloads a file, but it does not give me the same file that I
> would get if I paste the URL into my browser. I need the weekly
> historical data for a specified time period, and I need to be able to
> do it in a shell.
Hi
You didn’t change the date in your example; it still says 2011…
wget -q -O JNJ_`date +\%m%d%y%H%M%S`.csv "http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2014&g=w&ignore=.csv"
You could script it so the variables are passed as options, e.g. stock
name, start and finish dates, etc…
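Something along these lines, for example (untested, and the script name
and argument order are just placeholders):

#!/bin/bash
# get_history.sh SYMBOL A B C D E F
# A..F are the same a= to f= values used in the URL above
# (start month/day/year, end month/day/year); g=w requests weekly data.
sym=$1
wget -q -O "${sym}_$(date +%m%d%y%H%M%S).csv" \
  "http://real-chart.finance.yahoo.com/table.csv?s=${sym}&a=$2&b=$3&c=$4&d=$5&e=$6&f=$7&g=w&ignore=.csv"

Then, for instance, ./get_history.sh JNJ 08 10 2010 08 10 2014 would fetch
the same range as the example above.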
--
Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.1 Kernel 3.11.10-21-desktop
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!
On 2014-09-16 22:15, golson765 wrote:
> I tried using “wget” like this:
>
>> wget
> http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv
Try «wget "http:/...."» instead. I do get a valid csv file, but with the
wrong name.
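Without the quotes the shell takes each & as the background operator, so
wget only gets the part up to the first &; that is where all those [1] to
[8] job lines came from, and why you only got s=JNJ. Something like:

# the double quotes keep the & characters away from the shell
wget "http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv"

It will still be saved under that long "table.csv?..." name, though; the
-O option lets you pick a proper filename.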
--
Cheers / Saludos,
Carlos E. R.
(from 13.1 x86_64 "Bottle" at Telcontar)
On 09/16/2014 04:25 PM, Carlos E. R. wrote:
> On 2014-09-16 22:15, golson765 wrote:
>
>> I tried using “wget” like this:
>>
>>> wget
>> http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv
>
> Try «wget "http:/...."» instead. I do get a valid csv file, but with the
> wrong name.
>
Thanks, that is what worked: putting double quotes around the URL. Great!
I also added the -q and -O options that Malcolm suggested so the file
gets saved with the right filename. Thank you!
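For the record, the command I ended up with looks like this (the output
filename is just the one I picked):

# -q turns off the progress output, -O sets the output filename,
# and the quotes keep the shell from splitting the URL at the & signs
wget -q -O jnj_weekly.csv "http://real-chart.finance.yahoo.com/table.csv?s=JNJ&a=08&b=10&c=2010&d=08&e=10&f=2011&g=w&ignore=.csv"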
--
G.O.
Box #1: 13.1 | KDE 4.12 | AMD Phenom IIX4 | 64 | 16GB
Box #2: 13.1 | KDE 4.12 | AMD Athlon X3 | 64 | 4GB
Laptop: 13.1 | KDE 4.12 | Core i7-2620M | 64 | 8GB