The first thing I did was write the code that uses the Twitter API to get the information I wanted. I had never worked with JSON before, but after a little trial and error I was able to open the search URL and pull out the user, the text, and the tweet URL. I am searching for "Python Programming" in the code below.
import urllib2
import json
search_term = 'python+programming'
search_count = '25'
twitter_search = 'http://search.twitter.com/search.json?q=' + search_term + '&rpp=' + search_count + '&result_type=mixed&lang=en'
response = urllib2.urlopen(twitter_search)
json_feed = json.load(response)
parent = json_feed["results"]
# loop through the results and print the user, text, and tweet url
for item in parent:
    print item["from_user"]
    print item["text"]
    print 'https://twitter.com/' + item["from_user"] + '/status/' + str(item["id"])
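For reference, this is roughly the shape of each item in json_feed["results"] that the loop above relies on. The field names come straight from the code; the values are just placeholders, not real API output.

# Rough shape of one search result (placeholder values only):
item = {
    "from_user": "some_user",                # screen name, used to build the status URL
    "text": "A sample tweet about Python.",  # the tweet body
    "id": 123456789,                         # numeric tweet id, appended to the status URL
}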
The Model
from django.db import models
class PythonTweets(models.Model):
    tweetUsr = models.CharField(max_length=200)
    tweetText = models.CharField(max_length=500)
    tweetUrl = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
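As a quick sanity check you can create a row by hand from the Django shell (python manage.py shell). This is just an illustration with made-up values, assuming the app is installed and its table already exists.

from datetime import datetime
from tweeeter.models import PythonTweets

t = PythonTweets(
    tweetUsr='example_user',
    tweetText='An example tweet about Python programming.',
    tweetUrl='https://twitter.com/example_user/status/1',
    pub_date=datetime.now())
t.save()
print PythonTweets.objects.count()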
The View
The view is pretty straightforward and mostly follows what I learned in the Polls tutorial.
# Create your views here.
from django.shortcuts import render_to_response
from tweeeter.models import PythonTweets
def python_search(request):
    latest_python_tweets = PythonTweets.objects.all().order_by('-pub_date')
    return render_to_response('tweeeter/python.html', {'latest_python_tweets': latest_python_tweets})
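The view also needs a URL pointing at it. The actual urls.py is on my GitHub, but a minimal sketch looks something like this, assuming a Django 1.x-style patterns() URLconf; the regex is my own placeholder.

from django.conf.urls import patterns, url
from tweeeter.views import python_search

urlpatterns = patterns('',
    url(r'^python/$', python_search),
)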
The Template
The template is just a test page to show the tweets. I will pretty it up in a future blog post.
<h1>Test Python Page</h1>
{% if latest_python_tweets %}
<ul>
{% for ptweet in latest_python_tweets %}
<li><a href="{{ ptweet.tweetUrl }}/">{{ ptweet.tweetText }}</a></li>
{% endfor %}
</ul>
{% else %}
<p>No Tweets Available</p>
{% endif %}
Adding the tweets to the database
Since I write PL/SQL and SQL for a living, my first thought was to write a SQL script and run it as a cron job. I actually wrote a little of it before the facepalm: this is OOP, not SQL. The objects' attributes live in the database, so all I have to do is loop through each tweet and create a new PythonTweets object. Way too easy! My plan now is to cron the script below to run once an hour or so to make sure I stay within the Twitter limits. I still have to add some cleanup to delete rows from the database when they reach a certain age; a rough sketch of that cleanup follows the script.
import urllib2
import json
from tweeeter.models import PythonTweets
from time import gmtime, strftime
todays_date = strftime('%Y-%m-%d %H:%M:%S', gmtime())
#change below to customize the results
search_term = 'python+programming'
search_count = '25'
twitter_search = 'http://search.twitter.com/search.json?q=' + search_term + '&rpp=' + search_count + '&result_type=mixed&lang=en'
#open twitter and get what you want.
response = urllib2.urlopen(twitter_search)
json_feed = json.load(response)
#cycle through tweets, create objects and save.
parent = json_feed["results"]
for item in parent:
    print item
    rds = PythonTweets(
        tweetUsr=item["from_user"],
        tweetText=item["text"],
        tweetUrl='https://twitter.com/' + item["from_user"] + '/status/' + str(item["id"]),
        pub_date=todays_date)
    rds.save()
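The cleanup isn't written yet, but it should just be another ORM call tacked onto this script. A minimal sketch, assuming I delete anything older than seven days (the cutoff is an arbitrary example, and utcnow() matches the gmtime timestamps stored above):

from datetime import datetime, timedelta

# drop tweets older than the cutoff; seven days is only an example
cutoff = datetime.utcnow() - timedelta(days=7)
PythonTweets.objects.filter(pub_date__lt=cutoff).delete()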
What Else?
I had to add the new application to settings.py and update both urls.py files. If you need to see the code you can find it on my GitHub account. My next plan of attack is to add the cleanup of old data to the object-creation script and pretty up the template.
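For anyone following along, that wiring looks roughly like this. It is only a sketch based on what I described above, not the exact files (those are on GitHub), and the /tweeeter/ prefix is my own placeholder.

# settings.py: register the new application
INSTALLED_APPS = (
    # ... the stock Django apps ...
    'tweeeter',
)

# project urls.py: hand everything under /tweeeter/ to the app's own urls.py
from django.conf.urls import patterns, include, url

urlpatterns = patterns('',
    url(r'^tweeeter/', include('tweeeter.urls')),
)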