Where to get sample OFX files for testing?

I am building a PHP application using the OFX Parser class from http://www.phpclasses.org/package/5778-PHP-Parse-and-extract-financial-records-from-OFX-files.html . But where can I get a sample OFX file to exercise this class and test my application?

Try searching for "filetype:ofx" on Google; I found a couple that way. If you need a whole batch for more thorough testing, I don't know of a good source.

Easiest by far is to have an online bank account yourself that supports OFX downloads. But you're right: it's surprisingly difficult to find anything beyond the simplest case online.
I dug up this article on IBM developerWorks that includes a quick sample. It's about parsing OFX with PHP, and it helpfully shows the difference between a well-formed XML version of an OFX file and the start-tag-only version you'll often get when you download from various banks. Even that sample, though, contains only one withdrawal and one deposit.
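If all you need is something to feed a parser, you can also hand-write a minimal statement. The sketch below shows roughly what a minimal OFX 1.x download looks like (SGML-style, with the unclosed tags mentioned above); the header keys and aggregate names follow the OFX 1.x spec, but every account number, date, and amount here is made up:

```
OFXHEADER:100
DATA:OFXSGML
VERSION:102
SECURITY:NONE
ENCODING:USASCII
CHARSET:1252
COMPRESSION:NONE
OLDFILEUID:NONE
NEWFILEUID:NONE

<OFX>
<SIGNONMSGSRSV1>
<SONRS>
<STATUS>
<CODE>0
<SEVERITY>INFO
</STATUS>
<DTSERVER>20130101120000
<LANGUAGE>ENG
</SONRS>
</SIGNONMSGSRSV1>
<BANKMSGSRSV1>
<STMTTRNRS>
<TRNUID>1
<STATUS>
<CODE>0
<SEVERITY>INFO
</STATUS>
<STMTRS>
<CURDEF>USD
<BANKACCTFROM>
<BANKID>123456789
<ACCTID>000111222
<ACCTTYPE>CHECKING
</BANKACCTFROM>
<BANKTRANLIST>
<DTSTART>20130101
<DTEND>20130131
<STMTTRN>
<TRNTYPE>DEBIT
<DTPOSTED>20130105
<TRNAMT>-25.00
<FITID>0001
<NAME>Sample withdrawal
</STMTTRN>
<STMTTRN>
<TRNTYPE>CREDIT
<DTPOSTED>20130110
<TRNAMT>100.00
<FITID>0002
<NAME>Sample deposit
</STMTTRN>
</BANKTRANLIST>
<LEDGERBAL>
<BALAMT>75.00
<DTASOF>20130131
</LEDGERBAL>
</STMTRS>
</STMTTRNRS>
</BANKMSGSRSV1>
</OFX>
```

A well-formed OFX 2.x file has the same structure, but with an XML declaration and every tag closed.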

These are the two references I used: the first is about the structure of an OFX file, and the second gives you the connection information for the financial institutions.

Try using https://github.com/wesabe/fixofx. It has a script called fakeofx.py
The fakeofx.py script generates real-ish-seeming OFX for testing and
demo purposes. You can generate a few fake OFX files using the script,
and upload them to Wesabe to try it out or demonstrate it without
showing your real account data to anyone.
The script uses some real demographic data to make the fake
transactions it lists look real, but otherwise it isn't at all
sophisticated. It will randomly choose to generate a checking or
credit card statement and has no options.


Connecting To A Website To Look Up A Word (Compiling Mass Data / Webcrawler)

I am currently developing a word-completion application in C#, and after getting the UI up and running, setting keyboard hooks, and other things of that nature, I came to the realization that I need a word list. The only issue is, I can't seem to find one with the appropriate information. I also don't want to spend an entire week gathering and formatting a word list by hand.
The information I want is something like "TheWord, the definition, verb/etc."
So it hit me: why not download a basic word list with nothing but words (already did this; there are about 109,523 words), then write a program that iterates through every word, connects to the internet, retrieves the data (definition etc.) from some arbitrary site, and creates XML data from that information? It could be 100% automated, and I would only have to wait for maybe an hour, depending on my internet connection speed.
This however, brought me to a few questions.
How should I connect to a site to look up these words? << This is my actual question.
How would I read this information from the website?
Would I piss off my ISP or the website for that matter?
Is this a really bad idea? Lol.
How do you guys think I should go about this?
Someone noticed that Dictionary.com uses the word as a suffix in the URL, which will make it easy to iterate through the word file. I also see that the page is served as XHTML (or maybe just HTML). Here is the source for the word "cat": http://pastebin.com/hjZj6AC1
For what you marked as your actual question: you just need to download the page from the website and find what you need in it.
A great tool for this is CsQuery, which allows you to use jQuery selectors.
You could do something like this:
var dom = CQ.CreateFromUrl("http://dictionary.reference.com/browse/cat");
string definition = dom.Select(".definitionDiv").Text(); // ".definitionDiv" is a placeholder selector
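Whatever language you use for the crawl itself, the pipeline is the same: fetch one page per word, extract the fields, append an XML entry, and throttle between requests. Here is a rough sketch of the XML-building half in Python; the element names, and the URL pattern in the comments, are assumptions for illustration, not anything Dictionary.com documents:

```python
import xml.etree.ElementTree as ET

def entry_to_xml(word, definition, part_of_speech):
    # Build one <entry> element from data already scraped off the page.
    entry = ET.Element("entry", word=word)
    ET.SubElement(entry, "definition").text = definition
    ET.SubElement(entry, "type").text = part_of_speech
    return entry

def build_wordlist(records):
    # records: iterable of (word, definition, part_of_speech) tuples
    root = ET.Element("words")
    for word, definition, pos in records:
        root.append(entry_to_xml(word, definition, pos))
    return ET.tostring(root, encoding="unicode")

# The crawl loop itself would look something like this (URL pattern assumed):
#   for word in words:
#       html = urlopen("http://dictionary.reference.com/browse/" + word).read()
#       ...extract definition and part of speech from html...
#       time.sleep(1)  # throttle: ~109k back-to-back requests may get you blocked
```

The `time.sleep` in the loop is the part that answers the ISP question: a polite delay between requests is far less likely to annoy anyone than hammering the site as fast as your connection allows.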

Are libraries just new sets of commands [closed]

I am new to programming and I have some questions concerning the libraries.(Please excuse me if some questions seem foolish.)
- First of all, when I searched for what libraries are, I was told they are reusable code. But when I googled, for example, how to develop a website with C#, I found that I need a library. Is this library just a set of commands I can use? Or can it, for example, allow you to visually create a website?
- What's more, in a game-design program I saw that it needed C# code for the character's movement speed. Does this mean the software has its own library and needs me to learn new commands? Any additional information would be nice.
Thanks in advance...
A library contains certain functionality that you want to add to your application. In an extremely simplified example: if you want your application to be able to read and write text files on your computer, you need to import a library that was written to help you read and write files.
You, of course, don't have to use this library; you could write your own. But that would be a waste of time, because somebody has already created one with easy-to-use functions like "hey, go get this file" or "hey, write this to a file named FooBar.txt".
It's pre-compiled and usually works as expected, because thousands of other people have used and tested it before you.
To harken back to the days before computers were in refrigerators, think of each library as a sort of secretary or clerk, with a very specific skill set. So you have:
a stenographer library
a typist library
an outgoing-faxes clerk
a receptionist
an A/P clerk, etc.
You as the manager of the office (the project), don't want to worry about the details of each task, so you delegate each one to the appropriate clerk or secretary type, and worry only about the higher functions of the business.
A library is, more or less, just a compiled piece of code, typically a .dll. In order to use it in your project, you add a reference to the assembly (the .dll) and then bring it into scope with a using statement in your class (or wherever you are using it). Something like this:
using System.Data.SqlClient;
There are several options for building a website on the .NET stack, such as ASP.NET or ASP.NET MVC.
As to your question about the gaming space, the XNA framework has almost all that you would need to get started making games.

C# flat-file database for a standalone Windows application

A question like this has been asked in different forms several times, yet here I am.
I am writing a standalone Windows program that takes user input, say three fields, and has to store it on disk. I also need to be able to delete and edit those records, and the storage must be UTF-8.
Here is the actual requirement: I host this application on my server and users download it. I want the database to be created automatically the first time the program runs. In other words, the user can (and should) download only one file: the program itself. The program will be a single .exe with no dependencies.
While writing this question I tried SQLite for .NET 2.0: I got an installer from SourceForge, installed it, and included it in my application, but it showed an error saying there was a problem with it. So if people suggest SQLite, please give me a reference on how to include it in C# / .NET 2.0.
I am new to .NET, so it is taking me a very long time to fit things together, which is why I am posting this question. Any comments, suggestions, advice, and references would be welcome.
I have attached the error I got.
Edit after the first reply:
A user can save as many sets of three fields as they like; I mentioned three fields only as an example. They will save as many records as they want, anywhere from 100 to effectively unlimited.
If it's only three fields to store, forget about databases and store the data in an XML file.
You can create a class that has these three properties, and then serialize/deserialize it on demand.
Here's a nice tutorial from Microsoft about XML serialization: http://support.microsoft.com/kb/815813
Deserialization is done in a very similar fashion.
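As a sketch of that round trip (shown in Python for brevity; in C#, the linked KB article's XmlSerializer does the same job, and the three field names here are invented):

```python
import os
import xml.etree.ElementTree as ET

class Record:
    # The three field names are invented; use whatever your program stores.
    def __init__(self, name, phone, note):
        self.name, self.phone, self.note = name, phone, note

def save(records, path):
    # Serialize: one <record> element per object.
    root = ET.Element("records")
    for r in records:
        e = ET.SubElement(root, "record")
        ET.SubElement(e, "name").text = r.name
        ET.SubElement(e, "phone").text = r.phone
        ET.SubElement(e, "note").text = r.note
    # encoding="utf-8" covers the UTF-8 requirement.
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def load(path):
    # First run: no file yet, so the "database" starts out empty.
    if not os.path.exists(path):
        return []
    root = ET.parse(path).getroot()
    return [Record(e.findtext("name"), e.findtext("phone"), e.findtext("note"))
            for e in root.findall("record")]
```

Returning an empty list when the file does not exist is what gives you the "database created automatically on first run" behavior: the first `save` simply creates the file.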

Calculation on FPML

I am new to FpML, and our system is new to swap trading and handling. The FpML examples show that there are a lot of fields where we can also enter formulas for calculations. We save these FpML XML documents, with their information, directly into our system. I have been looking for tools that could help with the whole process of incorporating FpML, and have found tools like HandCoded that help with validating the XML, but I am unable to find one that eases the calculations: something that completes the XML before it enters the system.
Your profile still shows you as a student, which is why I assume this is part of your homework; so I'd like to give you a starting point.
Look at this posting, and search under the tags FpML and Python for some more information.
It would also be helpful to see the part of the source you are talking about. I assume you mean fpml-valuation.xsd and the related XML.
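As a starting point on the calculation side, the usual approach is to parse the document, pull out the numeric fields, and compute the derived value before the XML enters your system. A deliberately simplified sketch in Python: the fragment below only imitates the shape of a fixed-leg calculation, while real FpML uses namespaces and a far richer schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified stand-in for an FpML calculation block.
fragment = """<calculation>
  <notionalAmount>1000000</notionalAmount>
  <fixedRate>0.05</fixedRate>
  <dayCountFraction>0.5</dayCountFraction>
</calculation>"""

calc = ET.fromstring(fragment)
notional = float(calc.findtext("notionalAmount"))
rate = float(calc.findtext("fixedRate"))
fraction = float(calc.findtext("dayCountFraction"))

# Fixed-leg coupon: notional x rate x day-count fraction.
payment = notional * rate * fraction
```

In a real pipeline you would write `payment` back into the document (or a sibling valuation document) before persisting it, which is the "completing the XML before it enters the system" step.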

What is the easiest way to programmatically extract structured data from a bunch of web pages?

I am currently using an Adobe AIR program I have written to follow the links on one page and grab a section of data off the subsequent pages. This actually works fine, and for programmers I think this (or another language) provides a reasonable approach, to be written on a case-by-case basis. Maybe there is a specific language or library that allows a programmer to do this very quickly, and if so I would be interested in knowing what it is.
Also, do any tools exist which would allow a non-programmer, like a customer-support rep or someone in charge of data acquisition, to extract structured data from web pages without a bunch of copy and paste?
If you do a search on Stack Overflow for WWW::Mechanize and pQuery, you will see many examples using these Perl CPAN modules.
However, because you mentioned "non-programmer", perhaps the Web::Scraper CPAN module would be more appropriate? It's more DSL-like, and so perhaps easier for a non-programmer to pick up.
Here is an example from the documentation for retrieving tweets from Twitter:
use URI;
use Web::Scraper;

my $tweets = scraper {
    process "li.status", "tweets[]" => scraper {
        process ".entry-content", body => 'TEXT';
        process ".entry-date", when => 'TEXT';
        process 'a[rel="bookmark"]', link => '@href';
    };
};

my $res = $tweets->scrape( URI->new("http://twitter.com/miyagawa") );

for my $tweet (@{$res->{tweets}}) {
    print "$tweet->{body} $tweet->{when} (link: $tweet->{link})\n";
}
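The same select-and-extract pattern works in most languages. For comparison, here is a sketch in Python using only the standard library; the HTML is an inline, well-formed sample standing in for a fetched page (real pages usually need a tolerant HTML parser, but the shape of the code is the same):

```python
import xml.etree.ElementTree as ET

# Inline, well-formed sample standing in for a downloaded page.
html = """<ul>
  <li class="status">
    <span class="entry-content">Hello world</span>
    <span class="entry-date">Jan 1</span>
  </li>
</ul>"""

root = ET.fromstring(html)
tweets = []
for li in root.findall(".//li[@class='status']"):
    # Select each status item, then pull out the fields inside it.
    body = li.find(".//span[@class='entry-content']").text
    when = li.find(".//span[@class='entry-date']").text
    tweets.append((body, when))
```

The structure mirrors the Web::Scraper version: an outer selector for each record, inner selectors for its fields.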
I found YQL to be very powerful and useful for this sort of thing. You can select any web page on the internet; it will make it valid and then allow you to use XPath to query sections of it. You can output the result as XML or JSON, ready for loading into another script/application.
I wrote up my first experiment with it here:
Since then YQL has become more powerful with the addition of the EXECUTE keyword, which allows you to write your own logic in JavaScript and run it on Yahoo!'s servers before the data is returned to you.
A more detailed writeup of YQL is here.
You could create a data table for YQL that gets at the basics of the information you are trying to grab, and then the person in charge of data acquisition could write very simple queries (in a DSL which is pretty much English) against that table. It would be easier for them than "proper programming", at least...
There is Sprog, which lets you graphically build processes out of parts (Get URL -> Process HTML Table -> Write File), and you can put Perl code in any stage of the process, or write your own parts for non-programmer use. It looks a bit abandoned, but still works well.
I use a combination of Ruby with Hpricot and Watir; it gets the job done very efficiently.
If you don't mind it taking over your computer, and you happen to need JavaScript support, WatiN is a pretty damn good browsing tool. Written in C#, it has been very reliable for me in the past, providing a nice browser-independent wrapper for running through pages and getting text from them.
Are commercial tools viable answers? If so, check out http://screen-scraper.com/ ; it is super easy to set up and use for scraping websites. They have a free version which is actually fairly complete. And no, I am not affiliated with the company :)