Susi AI is an intelligent Open Source personal assistant. It is capable of chat and voice interaction and uses APIs to perform actions such as music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information. Additional functionality can be added as console services using external APIs. Susi AI is able to answer questions and, depending on the context, will ask for additional information in order to produce the desired outcome. The core of the assistant is the Susi AI server, which holds the “intelligence” and “personality” of Susi AI. The Android and web applications use its APIs to access information from a hosted server.
We use Travis CI to build and test the Java application and make sure that we don’t break any functionality. Travis CI is typically free for open source, but we experienced capacity constraints when running the tests and compiling the app. It took about 5 minutes to build and run the tests, and only then could we merge PRs and be sure our code works. We wanted to reduce this time further so we could get faster assurance that the code works. We thought about solutions and discovered that we could parallelize the build by splitting a single Travis CI instance into multiple isolated instances. We split the build and tests into one instance and the code coverage tests into another, and got a drop of ~2 minutes in the build time. Here’s how the .travis.yml looks:
# Sudo-enabled Ubuntu Trusty VM
- secure: DbveaxDMtEP+/Er6ktKCP+P42uDU8xXWRBlVGaqVNU3muaRmmZtj8ngAARxfzY0f9amlJlCavqkEIAumQl9BYKPWIra28ylsLNbzAoCIi8alf9WLgddKwVWsTcZo9+UYocuY6UivJVkofycfFJ1blw/83dWMG0/TiW6s/SrwoDw=

# Build, test and deploy to Google Cloud and Heroku
- rm -f $HOME/.gradle/caches/modules-2/modules-2.lock
- rm -fr $HOME/.gradle/caches/*/plugin-resolution/
- if [ -e ./gradlew ]; then ./gradlew test jacocoTestReport; else gradle test jacocoTestReport; fi
- bash kubernetes/travis/deploy.sh
- bash kubernetes/travis/deploy-master.sh

before_script: pip install --user codecov
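The snippet above does not show the part that actually creates the parallel jobs. As a rough sketch (the `TASK` variable and job layout are illustrative, not the exact susi_server configuration), a Travis CI build matrix splits one build into isolated instances like this:

```yaml
# Illustrative sketch only: each entry under env becomes its own
# isolated VM, and the resulting jobs run in parallel.
language: java
jdk: oraclejdk8

env:
  - TASK=build      # compile, run tests, deploy scripts
  - TASK=coverage   # jacoco report, upload to codecov

script:
  - if [ "$TASK" = "build" ]; then ./gradlew test; fi
  - if [ "$TASK" = "coverage" ]; then ./gradlew jacocoTestReport && codecov; fi
```

Because each matrix entry is a separate machine, the two jobs no longer wait on each other, which is where the ~2 minute saving comes from.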
With this configuration, one instance builds and tests the app and deploys it to Heroku using a secure (encrypted) setting, while the other instance runs the tests with coverage and uploads the report to Codecov. With this method we got a sharp drop in test times and much faster feedback on our builds.
If you are interested in the Susi server, here is how you can get it:
Where can I download ready-built releases of Susi AI?
Nowhere; you must clone the git repository of Susi AI and build it yourself. That’s easy, just do
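A minimal sketch of the clone-and-build steps (the repository URL is taken from the issue/PR links below; the Gradle commands assume the same wrapper the .travis.yml above uses):

```shell
# Fetch the source (no pre-built releases are published)
git clone https://github.com/fossasia/susi_server.git
cd susi_server

# Build with the bundled Gradle wrapper; fall back to a
# system-wide Gradle install if the wrapper is missing
if [ -e ./gradlew ]; then ./gradlew build; else gradle build; fi
```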
You can also deploy it on Google Cloud, Heroku, Docker, Cloud9, Eclipse, etc.
I have done all this work with love for our community, FOSSASIA:
FOSSASIA develops Open Source software and hardware for conversational AIs, science, and event management with a global developer community from its base in Asia. It organizes Open Technology events, runs coding programs and the #Codeheat development contest. The annual FOSSASIA Summit is the premier Open Technology event in Asia for developers, contributors, start-ups, and technology companies. FOSSASIA was founded in 2009 by Mario Behling and Hong Phuc Dang.
Link to my issue: https://github.com/fossasia/susi_server/issues/632
Link to the PR: https://github.com/fossasia/susi_server/pull/635
Hope to hear good things from you!