Android is now almost everywhere in the tech world. Whether you look at smartphones, tablets, smartwatches or even in-dash entertainment systems, Android is working behind the scenes to deliver a smarter experience. One of the reasons for its huge success is the ease of development. The Google Play Store currently holds over 2.2 million apps for Android devices, enough to make Google's platform the leading mobile operating system, followed by Apple's iOS and Microsoft's Windows Phone. That alone gives you a reason to opt for Android development without a second thought.
Here, we cover five tools that will help you become an Android developer. These tools will enable you to create substantial apps for the open source platform and gain recognition for your work.
Android Studio is the official integrated development environment (IDE) for Android, which Google launched back in 2013. Its Android Virtual Device (AVD) Manager lets you test apps live on an emulator without any physical hardware, and if you connect a real Android device, you can debug your apps on it directly.
To offer a multi-platform experience, Google has designed its Java-powered Android Studio for Windows, Mac OS X and Linux. It supports Android Wear app development in addition to native Android apps. Further, there is an option to integrate Google Cloud Messaging and App Engine through the built-in Google Cloud Platform access to enrich the end-user experience.
Android NDK (Native Development Kit) offers a set of tools that allow you to use C and C++ code when developing Android apps. It ships with prebuilt libraries so that you can reuse existing code and build innovative offerings.
Google's aim with the release of the Android NDK is to attract C and C++ developers and thereby grow the number of Android apps. Thus, you don't need to be a master of Java to develop something for Android users. You can also use third-party solutions to integrate the NDK into the Eclipse IDE or Microsoft's Visual Studio.
If you want to develop an app with a 3D experience, you can use Unity3D. This tool offers a set of proprietary APIs that help you build attractive interfaces for a variety of Android devices. Additionally, you can use the engine's high-end features to develop 3D mobile games.
Written in C and C++, Unity3D lets you develop apps and games in C#. You can additionally use design software such as Adobe Photoshop or Autodesk Maya to create textures and shadows for your creations.
Qt for Android
For programmers with knowledge of C++ and QML, the Qt Company has designed Qt for Android. This framework allows you to develop Qt 5 apps for devices running Android 4.1 (API level 16) or above. Additionally, it provides simple solutions for handling multimedia content and reading data from various sensors.
Once the coding work is complete, Qt for Android lets you create and deploy an Android Application Package (APK) through Qt Creator. The framework requires the Android SDK and NDK to build your apps.
Microsoft, somewhat ironically, supports Android app development through Xamarin. This is one of the youngest tools on this list, and perhaps the easiest way to develop not just Android apps but iOS and Windows apps as well.
Xamarin uses the C# language for designing your Android apps. Microsoft recently announced the open source availability of the Xamarin SDKs, which include the Mono runtime ports and command-line build tools, to support the community. More than a million developers around the world already use Xamarin, which goes some way towards explaining why you might opt for this solution.
So these are the tools you can use today to give a fillip to your presence in the ever-growing world of Android. You will, however, still need some coding knowledge to kick-start your developments.
The open source smart home space is starting to make sense. While Nest, SmartThings and others represent the heavy hitters in the field, a number of smaller, open source solutions are helping show exactly what the home automation field is capable of. And it doesn't just have to be about controlling your thermostat. Here are five open source home automation projects we love.
BREW YOUR OWN SOLUTION
For many longtime tech fans, the phrase "homebrew" brings to mind the Homebrew Computer Club: the legendary Silicon Valley computer club that counted Steve Jobs and Steve Wozniak among its early members.
But for software engineer Dan Watford it means something entirely different: homebrewed ales. Having recently launched into the world of homebrews, Watford decided to come up with a smart solution to deal with having batches of alcohol ruined by inconsistent temperatures.
In response he started by rigging up a DS18B20 temperature sensor to a Raspberry Pi to create an intelligent way of monitoring temperature: not just from moment to moment, but in a way that visualizes the info over time.
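On a Raspberry Pi, a DS18B20 on the 1-Wire bus typically shows up under /sys/bus/w1/devices/ as a small text file. As a hedged sketch of reading it (the helper name and sample reading are illustrative, not Watford's actual code), parsing one of those readings in Python might look like this:

```python
def parse_w1_slave(raw):
    """Parse the text a DS18B20 exposes via the Raspberry Pi 1-Wire
    sysfs interface (/sys/bus/w1/devices/28-*/w1_slave). Returns the
    temperature in degrees Celsius, or None if the CRC check failed."""
    lines = raw.strip().splitlines()
    # The first line ends in "YES" when the sensor's CRC check passed.
    if not lines[0].strip().endswith("YES"):
        return None
    # The second line carries the reading in millidegrees, e.g. "t=19687".
    _, _, millidegrees = lines[1].partition("t=")
    return int(millidegrees) / 1000.0

sample = (
    "3b 01 4b 46 7f ff 0c 10 71 : crc=71 YES\n"
    "3b 01 4b 46 7f ff 0c 10 71 t=19687\n"
)
print(parse_w1_slave(sample))  # 19.687
```

Logging the returned value with a timestamp every minute or so is enough to build the over-time visualisation Watford describes.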
“Scratching your own itch was always the mantra of the open source movement, and that very much carries over to the home automation space,” Watford says. “Whatever the particular problem that you want to solve is, you don’t have to go out and find an off-the-shelf solution. It can be time-consuming, but it’s absolutely liberating.”
Having created a solution that allows him to more successfully brew ale from the comfort of his own home, Watford is now working on a way to carry out his brewing from a distance. Since he works away four nights out of every week, he’s come up with a solution that will let him control his brewing using his smartphone.
“It’s good to know what the temperature is, but I want to be able to control it as well,” he says. “To do this I’ve bought several EnerGenie power sockets, which offer a remote control module you can attach to the Raspberry Pi. My hope is that I’ll be able to create a solution that can look at what the temperature is and then operate a heater if necessary.”
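The control loop Watford describes, reading the temperature and then switching a heater via the remote sockets, is essentially a thermostat with hysteresis. A minimal sketch (target and band values are hypothetical):

```python
def heater_should_run(temp_c, heating, target=20.0, band=0.5):
    """Hysteresis thermostat: switch the heater on below target - band,
    off above target + band, and otherwise keep the current state so
    the power socket isn't toggled rapidly around the setpoint."""
    if temp_c < target - band:
        return True
    if temp_c > target + band:
        return False
    return heating

# Each loop iteration would read the DS18B20 and drive the
# EnerGenie socket; here we just simulate a few readings.
heating = False
for temp_c in [19.2, 19.8, 20.2, 20.7, 20.1]:
    heating = heater_should_run(temp_c, heating)
    print(temp_c, "heater ON" if heating else "heater OFF")
```

The dead band is what keeps a relay-based socket from clicking on and off every few seconds as the temperature hovers near the target.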
Other home automation projects are more whimsical. That doesn't make them any less useful, however.
“In the area of home automation, there are many small and some large problems that can be solved with existing technology,” says project creator Eric Tsai. “It’s actually really difficult to think of a problem that can’t be solved with a sensor and a microcontroller. The availability of cheap electronics today means that even when a commercial product exists, the equivalent DIY solution can be less expensive and more capable.”
Tsai's poop-tracking project started out as a team competition at his local Arduino user group. Using a $35 Raspberry Pi, an Arduino, and a commonly available GPS module, he built a GPS device that attaches to his dog's collar. The device senses when his dog goes to the toilet by using the tilt-switch signal to work out when the dog has held the "poop" position for several seconds. At this point a Google Map of the poop location is automatically generated, which can be viewed on a smartphone. A short audio clip also plays inside Tsai's home to let him know his pet has done its business.
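The "held position" detection Tsai describes boils down to watching for the tilt-switch signal staying asserted for a few seconds. A hedged sketch of that logic (the sample rate and hold time are assumptions, not Tsai's actual parameters):

```python
def detect_hold(samples, hold_seconds=3, sample_hz=10):
    """Given a stream of tilt-switch readings (1 while the dog holds
    the "poop" posture, 0 otherwise) sampled at sample_hz, return the
    index at which the posture has been held continuously for
    hold_seconds, or None if it never is."""
    needed = hold_seconds * sample_hz
    run = 0
    for i, reading in enumerate(samples):
        run = run + 1 if reading else 0
        if run >= needed:
            return i  # trigger: log the GPS fix, generate the map, play audio
    return None

# Half a second of noise, then the posture held for 3 s at 10 Hz
print(detect_hold([0] * 5 + [1] * 30))  # 34
```

Requiring a continuous run of samples acts as a debounce, so a bump of the collar doesn't generate a false map entry.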
That’s not the only canine-related home automation project Tsai’s built. His GPS collar enables him to find his dog if it wanders off, while a temperature gauge fills him in on if his pet is too hot or too cold.
“I also have a dog barking sensor that’ll email me if my dog is barking in his kennel, and quantifies the amount of barking,” he says. “Most of the devices I build cost less than $20 in parts, and most of the hardware and software is open source and readily available. Since temperature sensors are only a couple bucks, I choose to add one to practically every device I build just in case.”
WATER WAY TO GO
When former Motorola and aerospace engineer John Schuch had one of the irrigation pipes on his property burst underground, he was shocked to receive a monthly water bill for $8,000. The experience prompted him to begin searching for a smart device that could inform him in real time of how much water he was using.
“You’d think that would be fairly easy to find, right?” he says. “I couldn’t get one anywhere. There’s a huge push to conserve water in the United States, but in many cases users are precluded from knowing exactly how much water they’ve used over any given period of time.”
Schuch’s homemade solution had to fit three criteria: The system needed to be inexpensive, it had to be easy and non-invasive to install, and it needed to be straightforward to operate. The solution he devised meant connecting a three-axis magnetometer—which is used as a compass in smartphones and tablets—to the outside of his water meter. Since there is a magnet inside every water meter, the magnetometer was able to track just how much water was moving through the meter at any moment.
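Turning the magnetometer's raw readings into a usage figure means counting each pass of the meter's internal magnet. A hedged sketch, using two thresholds Schmitt-trigger style so noise around a single threshold isn't double-counted (the threshold values and volume-per-pulse are illustrative and would need calibrating against the actual meter):

```python
GALLONS_PER_PULSE = 0.1  # hypothetical; depends on the meter model

def count_pulses(magnitudes, high=30.0, low=10.0):
    """Count magnet passes in a stream of magnetometer field
    magnitudes. A pulse is registered when the signal rises above
    `high`; no new pulse can register until it falls back below
    `low`, which filters out noise near the threshold."""
    pulses = 0
    armed = True
    for m in magnitudes:
        if armed and m > high:
            pulses += 1
            armed = False
        elif not armed and m < low:
            armed = True
    return pulses

readings = [5.0, 35.0, 32.0, 8.0, 31.0, 5.0]
pulses = count_pulses(readings)
print(pulses, "pulses, approx.", pulses * GALLONS_PER_PULSE, "gallons")
```

Because the magnetometer only listens to the field, this approach stays non-invasive: nothing is plumbed into the water line itself.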
From there a signal is sent, via the cloud, to a display in Schuch's home, where the information is tabulated and clearly presented. What makes the device "smart" is that Schuch has added trigger elements, which means emails and push notifications can be sent out when certain thresholds are crossed. For example, if water consumption goes over 1,000 gallons in a 24-hour period, Schuch is alerted that there is likely a problem with his water system.
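That threshold trigger can be sketched as a sliding 24-hour window over the usage readings (the class and field names here are hypothetical, not from Schuch's actual code):

```python
from collections import deque

class UsageMonitor:
    """Track water use over a sliding window and flag when a
    threshold is crossed."""

    def __init__(self, threshold_gallons=1000.0, window_hours=24):
        self.threshold = threshold_gallons
        self.window = window_hours * 3600  # window length in seconds
        self.events = deque()              # (timestamp, gallons) pairs
        self.total = 0.0

    def record(self, timestamp, gallons):
        """Add a reading; return True if an alert should be sent."""
        self.events.append((timestamp, gallons))
        self.total += gallons
        # Discard readings that have aged out of the window.
        while self.events and self.events[0][0] <= timestamp - self.window:
            _, old = self.events.popleft()
            self.total -= old
        return self.total > self.threshold

monitor = UsageMonitor()
print(monitor.record(0, 600.0))     # False: 600 gallons in the window
print(monitor.record(3600, 500.0))  # True: 1,100 gallons within 24 hours
```

The same pattern generalises to Schuch's other ideas, such as cutting irrigation back once a daily budget is reached.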
“If you were in an area where there were strict water limitations, it would be possible for the system to watch how much water you use and shut down certain elements—for example, cutting your irrigation back to half the normal amount,” he continues. “You could keep the plants alive, but shut down the system that keeps your swimming pool topped up.”
So if you’re interested in creating a solution like one of the above, where do you go? Thankfully, homespun home automation systems don’t just exist for one use case. We’re also witnessing the rise of open source home automation hubs to match the likes of Apple’s HomeKit.
By far the best known of these is OpenHAB, an attempt to establish common interfaces and APIs in a smart home space that is still heavily fragmented. Originally created by software engineer Kai Kreuzer, OpenHAB has advanced to become the most mature open source framework out there—with a passionate and growing community behind it.
“Five years ago I built myself a home,” Kreuzer says. “I wanted a home automation solution that would be fine for the next decades, rather than just a few months. There were commercial integration systems available at the time, but I wasn’t happy to bet on just one horse. It would have made me reliant on one manufacturer or ecosystem. The only solution was to go open source.”
Written in Java and built on OSGi, OpenHAB was the result. It offers users a vendor- and technology-agnostic open source solution that is easy to maintain and simple to extend. That means not only can you use OpenHAB with practically any IoT device, you can also control the end product with dedicated iOS and Android apps, along with any computer capable of running a JVM, making Linux, Mac or Windows all possibilities.
On top of OpenHAB’s other advantages is the fact that it can work offline—meaning that you don’t have to upload your home automation data to the cloud. “Instead of calling OpenHAB part of the Internet of Things, I like to refer to it as the Intranet of Things,” Kreuzer continues. “Your devices should interact with each other, but the data doesn’t have to leave your home.”
Newer than OpenHAB is Home Assistant, a Python 3-based home automation hub created by software engineer Paulus Schoutsen. Originally a way for Schoutsen to teach himself the Python programming language, Home Assistant has evolved into an exciting home automation platform in its own right. If getting involved with a less mature Python-based project appeals to you, it’s well worth checking out.
“It started out with me just wanting the lights to turn on when I got home at night,” Schoutsen says. “It grew from there.”
Like OpenHAB, Home Assistant doesn’t just let you create “events” that can be activated individually, but also create triggers based on “states” like the sun’s rising and setting.
“For example, I’ve rigged my system to keep track of where the sun is in the sky,” Schoutsen continues. “If Home Assistant knows that you’re at home, and the sun is below the horizon, the lights will come on. If you want to create different abstractions for certain conditions, that’s also very possible to do.”
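In Home Assistant itself such a rule is expressed as declarative configuration, but the underlying state logic is simple enough to sketch in Python (the function and parameter names are illustrative, not Home Assistant's API):

```python
def lights_should_be_on(anyone_home, sun_elevation_deg):
    """State-based trigger: lights come on when someone is home and
    the sun is below the horizon (negative elevation)."""
    return anyone_home and sun_elevation_deg < 0.0

print(lights_should_be_on(True, -4.0))   # True: home after sunset
print(lights_should_be_on(True, 12.0))   # False: sun still up
print(lights_should_be_on(False, -4.0))  # False: nobody home
```

The point of treating presence and sun position as "states" rather than one-off events is that the rule re-evaluates whenever either state changes, so arriving home after dark and the sun setting while you are home both trigger the same behaviour.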
Currently Schoutsen uses Home Assistant to control a combination of Philips Hue lights (which turn on and off as people leave the house) along with WeMo switches and Google Chromecasts. By opening the code up to other users, however, he hopes to see other APIs added.
“Because there are currently no open standards, smart device makers have to reach out to each vendor in turn to get support,” he says. “That’s something we can work on changing. I’ve so far integrated all of the devices I have at home, but by putting Home Assistant out there I’m hoping that other people will pick it up and take it in directions I never considered. That’s my goal. And the response has been great so far.”
The ESP8266 might be the cheapest WiFi SoC available for IoT applications, but it's not exactly renowned for its power efficiency, and is often not considered the best choice for battery-powered applications. That has not stopped Jeija from developing a WiFi smartwatch based on the chip, and the 250 mAh battery used in the design is said to be good for one to two days of operation on a charge.
Main components of Jeija’s ESP8266 smartwatch:
SoC – Espressif ESP8266 with 32Mbit of flash memory
Debugging / Programming – FT232RQ for USB communication
RV-3029-C2 Real Time Clock with alarm function, and temperature sensor
3x user buttons
250mAh LiPo battery
MCP73831 LiPo charger
Dimensions – 35 x 39 x 11mm (board)
He designed the PCB with KiCad, and the firmware is based on esp-open-sdk. All hardware and software files are released on GitHub, under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC-BY-NC-SA) and MIT licenses respectively. Bear in mind that this is mostly a hardware project, and the firmware exists to show that the hardware works, so you may have to tweak the code to make the watch do what you want.
Jeija also shot a video showing how to solder the components and test the watch as you build it. Interestingly, he could not buy the ESP8266-EX chip directly, so he bought an ESP-01 module instead and desoldered the chip to mount it on the watch PCB.
Between a vision for the IoT and the successful deployment of IoT solutions that create real value for enterprises, there is a gap that can be filled with any number of development options. Companies can try to build in house, partner with independent software vendors, retain software development experts, license existing solutions, and more.
Between IoT systems and endpoints there is an important layer – middleware – that can make or break a project’s success – operationally and economically.
Why reinvent the wheel and develop your own middleware, or pay money to license or rent it, when you can build upon a foundation available at absolutely no cost? When thinking about costs, it is critical to look at the entire lifespan of a solution, from prototype to demo to beta to a commercially scalable product, and work out what the true costs are. Build the financial business model for your solution, as this will shape your pricing, your gross margins and the value you create for your business.
Everyone knows that building competitive, modern products is challenging. To achieve success, engineers must stay focused on creating unique products that provide maximum value to the consumer. When developing such products, the ability to reuse existing components makes an engineer’s life a lot easier, helping him keep his eye on the prize to ensure a successful product launch. This is why we have built the Kaa IoT platform.
The Kaa IoT platform is designed with “universal patterns” in mind, thus able to support a wide variety of industries and applications. It dramatically reduces the cost associated with developing a connected product, accelerates time to market, and scales up organically as the business grows. Kaa’s powerful middleware functionality takes care of all of the back-end heavy lifting and allows vendors to concentrate on maximizing their product’s unique value.
Kaa is licensed under Apache 2.0, including both the server and client components, and is 100% free to use in open source or proprietary software with no royalties or fees.
Do the math on that, compared to proprietary, licensed alternatives. In my next post, I’ll cover the difference between open source IoT middleware and PaaS, including the economics.
The term “open source” generally refers to something that can be modified because its design is publicly accessible. Open source technology is defined as a production and development philosophy that allows end users and developers not only to see the source code of software, but to modify it as well. Ngrok is one such project, with its source available on GitHub.
What is Ngrok?
Ngrok is a multi-platform tunnelling and reverse proxy tool that establishes secure tunnels from a public endpoint on the internet to a locally running network service, while capturing all traffic for detailed inspection and replay.
Why Ngrok, and how have we deployed it at Vmoksha?
Before using ngrok, whenever we needed to expose a localhost application to the internet, we had to deploy the application on a server in a DMZ, or relocate the host to the DMZ and configure NAT in the firewall. We also had to make configuration changes in the external DNS where the domain is hosted. In general, a DMZ (De-Militarized Zone) is a computer host or small network inserted as a "neutral zone" between a company's private network and the outside public network; it prevents outside users from gaining direct access to a server that holds company data. The following are the issues we faced before deploying ngrok:
Unable to expose a localhost application directly to the internet without a DMZ and other network configuration
Unable to demonstrate an application to a client at short notice
Unable to share websites for testing purposes
Unable to develop services that consume webhooks (HTTP callbacks)
Unable to temporarily share a website that runs only on a developer machine
Time consumed on network and DNS configuration
Unable to debug or inspect HTTP traffic in a precise manner
Unable to run networked services on machines that are firewalled off from the internet
Unable to expose applications behind an HTTP proxy
Unable to forward non-HTTP and non-local network services
Architecture before Ngrok deployment
Real-time Ngrok Usage in Vmoksha (A Case Study):
After adopting ngrok, we addressed all the above requirements; most importantly, it serves our business needs in a faster, more secure and easier manner. Here is how we have used ngrok's features in our (Vmoksha) environment. In our scenario, we used ngrok on Windows as follows.
As ngrok is a small (about 9 MB) executable (.exe) tool, it can generally be run with the `ngrok` command followed by the port number that has to be exposed, for example `ngrok http 80`.
This gives you a random subdomain on ngrok.com that is accessible over both HTTP and HTTPS (secure).
Running Multiple Tunnels Simultaneously:
We make extensive use of one of ngrok's features: the ability to run multiple tunnels simultaneously. A few tunnel configurations are shown in the following YAML configuration file.
Sample YAML configuration for running multiple tunnels simultaneously (.yml file)
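The original sample file is not reproduced here, but a minimal ngrok configuration defining several named tunnels might look something like the following (the tunnel names, protocols and ports are illustrative; check the ngrok documentation for the exact schema your version expects):

```yaml
tunnels:
  website:
    proto:
      http: 8080
  api:
    proto:
      https: 9090
  ssh:
    proto:
      tcp: 22
  blog:
    proto:
      http: 3000
```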
We can start all four tunnels simultaneously by using the `ngrok start` command followed by the names of the tunnels we want to start:
The output terminal will then list each running tunnel and its public URL.
Request inspection with the Web Interface:
The web interface is accessible at http://127.0.0.1:4040, where we can inspect all the HTTP requests passing over the tunnel. We can also replay requests, which makes debugging quicker and easier.
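The data behind the web interface is also exposed by ngrok's local agent API on the same port; for instance, GET http://127.0.0.1:4040/api/tunnels returns a JSON list of active tunnels. A hedged sketch of pulling the public URLs out of such a response (the sample payload below is abbreviated; the exact shape depends on your ngrok version):

```python
import json

def public_urls(tunnels_json):
    """Extract the public URLs from the JSON returned by ngrok's
    local agent API (http://127.0.0.1:4040/api/tunnels)."""
    doc = json.loads(tunnels_json)
    return [t["public_url"] for t in doc.get("tunnels", [])]

# In practice this payload would come from urllib.request.urlopen(...);
# here we use a canned example so the snippet runs offline.
sample = json.dumps({
    "tunnels": [
        {"name": "website", "public_url": "https://abc123.ngrok.io"},
        {"name": "ssh", "public_url": "tcp://0.tcp.ngrok.io:12345"},
    ]
})
print(public_urls(sample))
```

This is handy for scripting, for example automatically emailing a client the temporary demo URL each time a tunnel is restarted.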
Sample Web Interface for HTTP Traffic Inspection:
Architecture after Ngrok deployment
All in all, this is an amazing, secure and powerful tool that helps us meet our business needs at the right time.
It’s so simple, really. What we all want is to build something that solves a problem in the best way we can. And we want it fast. And cheap. But it’s not so simple, after all. There are a myriad of combinations and permutations that will get you (eventually) from here to there. How can you ensure your IoT project gets off the drawing board and into the market?
Have you ever pondered this philosophical issue? I did, and I found an interesting perspective from Ashish Syal of Sierra Wireless.
Starting an Internet of Things (IoT) project is daunting. You may have an idea of what your end product or service is required to do (or you may not), but you don’t always know the best way to get there. So you start – you design a prototype, you source components, you build code, you test, and you find that the solution you thought you had just isn’t suitable after all – the connectivity technology you selected doesn’t work the way you need it to. So you start again.
And there’s the problem. Because of one component of your prototype, you have to start over completely, because your next choice requires a completely different connector, different hardware, and different code.
One of the ways the interoperability challenge can be mitigated is by establishing and using standards. Thoughtful and collaborative standardization improves choice and flexibility – developers can use devices from multiple vendors to customize a solution to meet their specific needs, and as a result, they can be more innovative and more cost-efficient in building their solutions.
Standards are necessary across the whole system and are being addressed by the industry in multiple ways. For example, oneM2M, a consortium of industry stakeholders, has developed technical specifications to address the need for a common M2M service layer that can be embedded within various hardware and software, and relied on to connect a wide range of devices to M2M application servers. The group has published the oneM2M Release 1 specifications, which are available for download from www.onem2m.org.
Another complementary approach to standards development is the release of designs and specifications developed by industry ecosystem players into the open source community as open hardware and interface standards for others to adopt. This approach has been growing in popularity lately, with open hardware reference designs and open interface standards becoming more readily available, and with major industry players collaborating to support them.
Platforms like these enable developers with limited hardware, wireless, or low-level software expertise to develop applications in days rather than months. If executed properly, these can significantly reduce the time and effort to get prototypes from paper to production by ensuring that various connectors and sensors work together automatically with no coding required. With industrial-grade specifications, these next-generation platforms not only allow quick prototyping but also rapid industrialization of IoT applications, because they can go straight from prototype into production.
An open source application framework also provides a wealth of resources, including online code libraries and developer communities, which give IoT application developers a head start in getting their products to market. One example of this, the Legato embedded platform, developed by Sierra Wireless and released last year, is free to download, can be embedded on any application processor and simplifies development of IoT applications.
There are four rules to understand when building products out of open source software. A product team (engineering, product management, marketing) needs to understand these rules to participate best in an open source project community and deliver products and services to their customers at the same time. These four rules are the start of all other discussions about the open source product space.
Rule #1: You ALWAYS get more than you give
The investment over time in a technology follows a normal distribution. Think about the investment in open source projects as a stacked bar chart in which company and individual contributions are taken together, replacing a single company's investment. So the collective investment in an open source project looks the same as a single company's investment in developing closed, proprietary software products. Individuals and companies contribute to meet their own selfish needs. It's a perfectly asymmetric relationship in which the contributor gives up something relatively small in value (their contributions) and gets something substantial in return (an entire working piece of software). One can look at OpenStack or the Linux kernel to see this activity best, in well-measured ways. Instead of viewing this as giving away IP, it should rightly be seen as gaining all the rest of the IP.
Lines-of-code counts and the COCOMO calculations come from OpenHub.net crawling repositories. I understand exactly how fraught lines-of-code metrics are, and I understand the concerns over the accuracy of COCOMO, but they are representative models, if not perfect ones, and they show the trends appropriately.
Rule #2: Don’t confuse projects with products
This one is sometimes hard to understand. First, we need to assume we’re talking about a well-run, successful open source project. (More on this in rules #3 and #4.) A project is a collection of working software that installs and runs and solves an interesting problem. It’s a collaboration and conversation in code between a relatively small number of people developing the software that have write access on the software repositories (i.e. committers) and hopefully a larger set of users and contributors. A product is something that solves a customer’s problem for money.
Projects are NOT products. While a lot of excellent software can come out of a well-run open source project that relieves some of the work for engineering (see Rule #1), there is enormous work still to be done to turn it into a problem-solving product for customers. The Linux kernel is a project. Fedora is a distro project. RHEL is a product. “But what about Ubuntu,” you cry? It’s a variation on the business model. Ubuntu is a distro project. The Long Term Support (LTS) editions are the basis of multiple products for Canonical.
Products meet customer expectations of value for money. They install out of the box, run, and come with warranties and indemnifications, services (support, upgrades, training, consulting), and documentation. The product may be a service or hardware wrapped around the project. Products are as varied as markets of problems customers want solved for money. While good projects tick the first two boxes (install, run), they don’t tackle the customer focus the same way. Projects also solve much narrower problems than customers want solved.
And don’t be confused about which open source licenses are involved and whether they’re “business friendly” or not. Different vendors use different strategies around different licenses. There are success stories and failures around every major OSI approved license. The license is irrelevant in comparison to business execution.
Rule #3: Don’t confuse communities with customers
This rule is tightly woven together with Rule #2, and is, if anything, harder to understand. If Rule #2 is about engineering and the business model, Rule #3 is about messaging and sales. Communities and customers live in different value spaces. Communities have time, but no money. Customers have money, but no time. Perhaps a better statement is that customers spend money to expedite a solution and remove risk, while communities (the individuals within them) spend time rather than money.
Traditionally, engineering feeds products into the pipeline, marketing feeds messages, and sales pulls qualified leads through into closed deals. A simple matter of execution. Many companies using open source think that the project community is part of this pipeline, and they believe it all the more when they find customers in community forums. They may even think the community project is a try-before-you-buy. All of this is WRONG.
The conversations that a company (product management, engineering, marketing) has with its relevant communities and conversations with paying customers are different conversations. Each conversation has specific tools and rules of engagement. Successful companies understand how to have these conversations. There are well understood tools for building and qualifying pipelines. There are equally well understood tools and rules for developing successful communities (Rule #4). Each tool chain and conversation has different metrics to capture and consider.
There IS interaction between a company’s community and customers. Community members are evangelists for the project (so there’s value to link it to the company brand in thoughtful ways). Community members provide support and expertise to potential customers that are self-qualifying in the project community before re-joining the product pipeline. Community also provides inertia for the ultimate product solution by being a sink for expertise and time invested. The challenge is to keep things crisply separate between the community and customers such that you can quickly and easily recognize what role the person in front of you is playing and guide them appropriately. There must never be confusion in the messages (deliberate or otherwise).
For example, the product is for customers. If you have a trial edition, as in try-before-you-buy, then the “buy” word is there, so, customer conversation. If you have a community edition, then build a community (Rule #4), because otherwise you’re simply publishing software under an open source license without gaining any of the benefits of an open source community. These are separate things, which brings us to the final rule.
Rule #4: Successful open source project communities follow well-understood patterns and practices
All successful open source community projects follow the same set of patterns and practices. The project starts as a conversation in code around a small core of developers. There are three on-ramps that need to be built. First, drive use and grow the user base, because that will lead to developers finding your project. (You NEED freeloaders! It means you're doing it right.) The software has to be easy to install and run. Users will tell you what they need, i.e. you get bug reports and feature requests in return for getting this right. More importantly, developers find you.
Second, make it blindingly easy to build the software into a known, tested state. This will allow developers to self-select and experiment for their own needs. Assuming a smart developer will figure it out is throwing away developer-cycles. They won’t. No one wants to waste their time on your laziness and lack of discipline. They’ll leave in frustration and disgust. Getting them back will be very hard if not impossible. Get this right and you’ll get the next set of harder bug reports and likely suggested fixes.
Third and last, tell developers how and where to contribute, and make it easy to do. Thank them for their contributions. If contributions other than code are to be encouraged, set up those channels just as clearly and make them easy to use. Regularly say "thank you." Reward folks any way you can, especially when you're a company.
Building communities is hard work. It doesn’t come for free. It does, however, bring value with it in terms of contributions from users and developers, as well as stickiness for the technology.
The last collection of practices in this space is around understanding the role of foundations in open source software. Foundations organize and clarify IP management regimes. Foundations can do many other things, but if they don't get this central thing right, then they fail the project community's potential for growth. Clarifying neutral IP ownership enables dedicated investment from participants and contributors interested in growing the entire ecosystem, i.e. companies trying to solve problems for customers.
Foundations create neutral space in which companies can participate on an equal footing. A company building products out of open source projects it didn't start and doesn't own (e.g. SUSE and Linux, HP and OpenStack) needs to understand clearly how its contributions are handled, and that it isn't simply building someone else's product. Likewise, a company that has started an open source project and wants to drive adoption and growth of an ecosystem around it would do well to contribute the project software IP to a separate non-profit foundation (or create one if appropriate), as Google is presently doing with Kubernetes, or as Pivotal has done with Cloud Foundry. This is ultimately a fourth on-ramp to get right.
CUPERTINO, Calif. – The first general-purpose graphics processing unit (GPGPU) available as open-source RTL was unveiled at the Hot Chips event here. In a separate talk, another academic described an integrated open-source processor with a core that's more power efficient than a similar block from ARM.
Although the GPGPU is in an early and relatively crude stage, it is another piece of an emerging open-source hardware platform, said Karu Sankaralingam, an associate professor of computer science at the University of Wisconsin-Madison. Sankaralingam led the team that designed the Many-core Integrated Accelerator of Wisconsin (MIAOW).
“An open source hardware platform is emerging that has inherent value,” said Sankaralingam. “We really need more people contributing to open source hardware to improve the platform layer so there’s enough for entrepreneurs to build from it,” he said.
One of Sankaralingam’s students was working on an out-of-order processor design when AMD released the instruction set architecture of one of its graphics chips. That sparked the idea to build an open-source GPU.
A 12-person team developed the MIAOW core in 36 months. Their goal was simply to create a functional GPGPU without setting any specific area, frequency, power or performance goals.
The resulting GPGPU uses just 95 instructions and 32 compute units in its current design. It only supports single-precision operations. Students are now adding a graphics pipeline to the design, a job expected to take about six months.
Matlab’s easy-to-use interface, power and flexibility have made it deservedly popular and useful software. But admit it: in bad times, this proprietary software can burn a hole in your pocket! So here we bring you four free and open source alternatives to Matlab that can help you do the same work, or even better, at zero cost. Enjoy!
1. Scilab: This is Free Software used for numerical computation. It also comes with a high-level programming language. Scilab began as a university project, but has since become much more than that. Its development is presently sponsored by Scilab Enterprises, which also provides paid professional services around the application.
2. GNU Octave: Popularly known as Octave, its official website describes it as a “high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation.”
It’s one of the best free software options for this kind of work, and with it you rarely need to fall back on Matlab. There are also many workarounds for its weak spots; for example, slow interpreted loops can be replaced by vectorized operations backed by precompiled modules written in C.
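The loop-versus-vectorization trade-off mentioned above is the same in any interpreted numerical environment. As a rough illustration (a Python/NumPy analogy rather than Octave itself; the function names and data here are made up for the example), the two styles below compute the same sum of squares, but the vectorized version hands the whole operation to precompiled C code instead of paying interpreter overhead on every iteration:

```python
import numpy as np

# Hypothetical data: a vector we want to reduce element-wise.
x = np.arange(1000, dtype=np.float64)

def loop_sum_of_squares(v):
    # Interpreted loop: every iteration pays interpreter overhead,
    # exactly like a naive for-loop in Octave or Matlab.
    total = 0.0
    for value in v:
        total += value * value
    return total

def vectorized_sum_of_squares(v):
    # Vectorized form: the multiply and the reduction both run in
    # precompiled C inside NumPy, analogous to Octave's built-in
    # vectorized operators.
    return float(np.dot(v, v))

# Both styles give the same answer; the vectorized one is far faster
# on large arrays.
assert loop_sum_of_squares(x) == vectorized_sum_of_squares(x)
```

For the integer-valued inputs used here both results are exact, so the equality check holds; with arbitrary floating-point data the two can differ by rounding, and a tolerance-based comparison would be the safer check.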
3. Sagemath: Also known as Sage, this is a unified interface to a suite of more than 100 Free Software applications. Put together, these apps become a suitable alternative to Matlab for everything from elementary to advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, etc.
Sagemath’s website describes its UI as “a notebook in a web browser or the command line. Using the notebook, Sage connects either locally to your own Sage installation or to a Sage server on the network. Inside the Sage notebook you can create embedded graphics, beautifully typeset mathematical expressions, add and delete input, and share your work across the network.”
4. Genius: Popularly known as Genius Math Tool (GMT), this is another alternative to Matlab with some cool features. The tool offers a built-in interactive programming language called GEL (Genius Extension Language). It started as a simple GNOME calculator, but has morphed into something more powerful and useful.
GMT’s official website describes it as a “general purpose calculator program similar in some aspects to BC, Matlab, Maple or Mathematica. It is useful both as a simple calculator and as a research or educational tool. The syntax is very intuitive and is designed to mimic how mathematics is usually written.”