Alagappan Karthikeyan, Author at COBRA softwares
07 Jul

Facebook uses your location data to suggest new friends, despite huge privacy concerns

Facebook’s “People You May Know” has always been an interesting algorithm to watch in action. While we can’t claim to have analyzed its function in-depth, it’s fascinating to see how the addition of a single friend can change the people Facebook thinks you might like to follow. Add a few high school buddies, and you’ll see more of your fellow graduates. Add a business contact or family member, and the same thing happens.

A recent investigation by Fusion.net found that Facebook doesn’t just prowl your phone contacts or mutual friends to find potential suggestions. The company also uses your location data to match you with other Facebook users, unless you’ve specifically configured your smartphone not to share that information.

A Facebook representative told Fusion: “We show you people based on mutual friends, work and education information, networks you’re part of, contacts you’ve imported and many other factors.” Simply being in the same location as another person isn’t supposed to trigger a “PYMK” notification, but the Fusion story discusses the case of a parent who attended an anonymous group meeting of parents with suicidal teenagers. The next morning, Facebook suggested that one of the other anonymous parents at the meeting was a person that the first parent might know. None of the parents in question had exchanged any contact information, and while Facebook claims that the two individuals must have had some other network in common, the episode highlights just how much data the company scoops up on its user base, as well as how little we really know about what the company does with that information.

Ongoing research into the so-called “six degrees of separation” has confirmed that the gap between two random people is often much smaller than you might think, which means it may not be particularly difficult for Facebook to find a common network between virtually any two people. Without knowing more about the company’s practices it’s impossible to know how two people attending the same anonymous event, neither of whom knew each other previously, wound up being matched together.

As privacy dwindles, real-world risks mount

Earlier this month, we wrote about Facebook’s aggressive use of location tracking as part of its new mobile advertising rollout. Going forward, Facebook will tell advertisers if their local ads drive additional foot traffic to stores and share aggregate demographic information on the customers in question. Obviously a feature like this can’t work if you don’t enable location services or connect to WiFi, which is why Facebook’s iOS and Android clients recommend leaving location services enabled.


The fact that two people who attended a deliberately anonymous meeting were “outed” to each other on Facebook is one small example of how continually chipping away at user privacy can have significant real-world consequences. Just because you visited a bar, went to a concert, or saw a movie doesn’t mean you’re interested in being friends with every single person you shared the experience with. People seeking treatment for mental or physical illness may not be interested in meeting people who share these traits, or may simply wish to keep their private lives private.

At its best, Facebook’s PYMK is a useful way to find old acquaintances or friends you’ve lost touch with. But there are risks associated with simply smashing people’s profiles together based on unknown network linkages and proximity to one another. We recommend disabling location services for Facebook unless you absolutely need to leave them on. There’s no reason to hand the company a comprehensive record of where you go and when you went there.

06 Jul

Microsoft confirms Surface 3 production will end this year, has no public plans for a follow-up

Microsoft has confirmed that its Surface 3 tablet, already in short supply, is in the process of being phased out. Microsoft will stop distributing the device by December at the latest, with no word yet on whether the company will replace its lower-end Surface hardware. In a statement sent to press, a Microsoft spokesperson said:

Since launching Surface 3 over a year ago, we have seen strong demand and satisfaction amongst our customers. Inventory is now limited and by the end of December 2016, we will no longer manufacture Surface 3 devices.

The original Surface RT and Surface Pro were announced just over four years ago, but neither was a smash hit out of the gate. The Surface RT was built on an underpowered Nvidia Tegra 3, and Microsoft badly flubbed its messaging on how its ARM-compatible version of Windows differed from its x86 counterpart. Surface 2 offered much-improved performance courtesy of Nvidia’s Tegra 4, but it was Surface 3 that returned the lower-end Surface platform to the x86 arena, courtesy of Intel’s Atom x7-Z8700 SoC. That chip offers a 1.6GHz base clock, a 2.4GHz burst frequency, two LPDDR3-1600 memory channels, and a Scenario Design Power (SDP) rating as low as 2W.

Of the three non-Pro Surface tablets, the Surface 3 was by far the best-received of the bunch, which makes Microsoft’s cancellation with no word of a successor a bit of a surprise.

The Surface 4 SoC conundrum

Normally, Microsoft would have continued using Intel’s lower-end Atom SoCs for the standard Surface line while relying on a mixture of Core M and Core i3/i5/i7 parts for the Surface Pro and Surface Book families. Intel’s decision to cancel its smartphone and tablet products undoubtedly threw a wrench into these plans, and it’s not clear what alternate hardware Microsoft could even use.

Surface 3, without its Type Cover. Small tablets need small CPU cores.

Intel doesn’t provide TDP figures for its Atom Z8700 family, and it doesn’t give SDP ratings for its Core M hardware. The lowest TDP configuration for current Core M chips is 3.5W — respectably low, but not a useful point of comparison since we don’t know how the two metrics relate to each other. Intel’s list prices, on the other hand, are a matter of public record — and the $281 price tag on a Core M is far above the $37 list price for an Atom SoC. You can say good-bye to x86 2-in-1s at $400-$500 price points if OEMs have to move to Core M processors.
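As a rough, purely illustrative calculation (the $37 and $281 list prices come from the paragraph above; the $450 device price and the assumption that the rest of the bill of materials stays constant are ours), the move looks like this:

```cpp
// Back-of-the-envelope sketch of what the Atom-to-Core-M list-price jump means
// for a hypothetical $450 2-in-1. Only the $37 and $281 figures come from
// Intel's published list prices quoted above; everything else is illustrative.
#include <cstdio>

int main() {
    const double atom_price   = 37.0;   // Atom x7-Z8700 list price (USD)
    const double core_m_price = 281.0;  // Core M list price (USD)
    const double device_price = 450.0;  // hypothetical mid-range 2-in-1 MSRP

    std::printf("CPU list-price increase: $%.0f\n", core_m_price - atom_price);
    std::printf("Atom as a share of the device price:   %.0f%%\n",
                100.0 * atom_price / device_price);
    std::printf("Core M as a share of the device price: %.0f%%\n",
                100.0 * core_m_price / device_price);
    return 0;
}
```

At that ratio, the CPU alone would eat up well over half of the hypothetical retail price, leaving little room for the display, storage, battery, and chassis.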

AMD doesn’t appear to have anything that would fit Microsoft’s needs, either. The company made a few overtures to the tablet market several years ago but never seriously tried to enter it. An updated version of AMD’s Puma+ SoC built on 14nm might have been able to address this space, but AMD decided not to update its cat cores past the 28nm node (at least, not in the PC space).

Rumors suggest that Microsoft might have held off on updating the Surface family this year so it could launch new hardware alongside its next major Windows 10 release, codenamed Redstone 2 and expected to arrive in early 2017. Redmond’s options for a Surface 3 successor, however, will still be quite limited: it could opt for Apollo Lake and accept higher power consumption, but the thicker, noisier design that would require wouldn’t play well with consumers, and Microsoft isn’t going to launch an ARM-only Surface 4. Keep in mind, all of this discussion applies only to the standard Surface family — Microsoft is expected to update the current Surface Book and Surface Pro 4 later this year or early next.

The simplest path for Microsoft to take would be to kill Surface 3 outright, keep iterating on the Surface Pro family, and let third-party OEMs like Dell and Asus handle the lower end of the market. It would be disappointing to see the lower-end Surface line die just after it finally found secure footing, but unless Intel is willing to build custom hardware for Microsoft’s relatively limited needs, there may not be a replacement solution on the market.

05 Jul

Chrome OS gets an experimental storage manager in dev channel update


Google’s Chrome OS launched almost exactly five years ago, and it’s come a long way since then. What was once simply a series of browser windows that couldn’t even keep themselves open in the background has gained support for offline services, multiple displays, and now even Android apps. However, there are still some basic things Google needs to shore up, like the lack of a storage manager. That’s starting to change in the latest dev release.

Chrome OS (like Chrome) has three channels — stable, beta, and dev. The dev channel is the first stop for new features that eventually filter down to beta and stable. It’s by definition an unstable release that has more bugs than the other channels. The latest dev release is v53, and it is here that Google has included a secret storage manager interface. Chrome OS also has a basic file manager, but it shows startlingly little information about your remaining storage. The other current alternative is to download a third-party system info app like Cog.

According to Googler François Beaufort (who also developed the aforementioned Cog tool), the experimental storage manager is disabled by default. To enable it, you’ll have to venture into Chrome’s flags at chrome://flags/#enable-storage-manager. Enable that flag and restart the device to see the changes. The storage manager appears as a new button in the settings under “Device.” It shows you the total capacity, the space used by downloads and offline files, and the remaining available space. The “downloads” and “offline files” labels are links to the relevant settings as well. It is still missing support for external storage, though.


Chrome OS is essentially a system-level Linux kernel that runs Chrome with a custom window management environment. It’s not literally just Chrome, but many of the platform’s features are based on Chrome web apps. Until recently, the vast majority of the tools available on Chrome OS were web-based. You could cache data offline via services like Drive (and Dropbox with some extra legwork), but your local storage on a Chromebook was never really eaten up. That may very well change with the addition of Android apps.

Android is designed to run apps locally and store cached data on the device. This has the potential to make Chrome OS much more powerful, but it requires new ways to manage everything. Android apps via the Play Store are currently supported only in the dev channel on the Chromebook Flip, but support should come to most Chrome OS devices in the next couple of months.

 

 

05 Jul

8×8 3mm RGB LED Matrix


A high-brightness, full-colour RGB matrix compatible with Colorduino and Rainbowduino boards and suitable for breadboard and Arduino use. It can be used for advertising boards, message displays, and similar projects.

This 8×8 RGB LED matrix is well suited to breadboards, Arduinos, Colorduinos, Rainbowduinos, and most other popular RGB matrix-driving boards; a minimal wiring-and-scan sketch appears after the feature list below. The module is a compact 6cm x 6cm board, but bright enough to be noticed from a distance.

Features:

  • 100% brand new and high quality
  • Easy to control from an Arduino or other microcontroller board.
  • Clear 3mm LEDs for maximum brightness.
  • Fully compatible with Rainbowduino.
  • Compatible with the Arduino RGB LED Matrix Driver Shield (Colors Shield).
  • Pins: 32
  • Wavelengths: 467nm (blue), 523nm (green), 623nm (red)
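If you want to experiment before adding a dedicated driver board, a bare Arduino can scan the matrix one row at a time. The sketch below is a minimal, hypothetical example: it assumes three daisy-chained 74HC595 shift registers handle the 24 colour-column lines and that Arduino pins 2-9 switch the 8 rows (in a real build the row lines need transistor drivers, since a single pin can’t safely carry a whole row’s current). Adapt the pin numbers and polarity to your actual wiring.

```cpp
// Hypothetical wiring, for illustration only: three daisy-chained 74HC595
// shift registers drive the 24 colour columns (8 red, 8 green, 8 blue) and
// pins 2-9 enable the 8 rows one at a time. In a real build the row lines
// need transistor drivers; check your matrix's polarity before copying this.
#include <Arduino.h>

const uint8_t DATA_PIN  = 11;  // 74HC595 serial data
const uint8_t CLOCK_PIN = 12;  // 74HC595 shift clock
const uint8_t LATCH_PIN = 13;  // 74HC595 latch
const uint8_t ROW_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};

// One byte per row per colour channel, one bit per pixel: frame[row][0..2] = R, G, B.
uint8_t frame[8][3];

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
  for (uint8_t r = 0; r < 8; r++) {
    pinMode(ROW_PINS[r], OUTPUT);
    digitalWrite(ROW_PINS[r], LOW);      // rows off (assumes HIGH = row enabled)
  }
  for (uint8_t r = 0; r < 8; r++) {      // example pattern: a red diagonal
    frame[r][0] = 1 << r;
    frame[r][1] = 0;
    frame[r][2] = 0;
  }
}

void loop() {
  // Scan one row at a time; persistence of vision blends the rows together.
  // 8 rows x 500 us is about 4 ms per frame, roughly a 250 Hz refresh rate.
  for (uint8_t r = 0; r < 8; r++) {
    digitalWrite(LATCH_PIN, LOW);
    shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, frame[r][2]);  // blue columns
    shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, frame[r][1]);  // green columns
    shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, frame[r][0]);  // red columns
    digitalWrite(LATCH_PIN, HIGH);

    digitalWrite(ROW_PINS[r], HIGH);     // enable this row
    delayMicroseconds(500);
    digitalWrite(ROW_PINS[r], LOW);      // disable before moving to the next row
  }
}
```

A sketch like this only gives each channel on/off control (eight colours per pixel); driver boards such as the Colorduino and Rainbowduino add constant-current drivers and per-channel PWM, which is how the matrix achieves full colour.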
04 Jul

R.I.P. Ian Murdock, Founder of Debian Linux, Dead at 42

Ian Murdock, the founder of the Debian Linux operating system, has passed away.
It is very sad to announce that Ian Murdock is no longer with us. His death has touched the entire software community. He was just 42.

 

The announcement of Murdock’s death came via a blog post on Docker’s website, where Murdock had been working as a member of the technical staff.

The cause of death is unclear at present, but Murdock tweeted the same day that he would commit suicide that night. His Twitter account has since been deleted.
At the time, some people speculated that Murdock’s account had been hacked and that the tweets were not his.
Murdock had also posted tweets on Monday suggesting he had been involved in a police incident, beaten by the police, and charged with battery.

However, neither Docker nor the San Francisco Police Department has commented on Murdock’s actual cause of death.
Murdock created Debian in August 1993, and yes, the “ian” in the project’s name comes from Ian Murdock.

He also worked at Sun Microsystems and served as the CTO of the Linux Foundation.

It is with deep regret that we say goodbye to our great friend and one of the most significant contributors to the Linux community, our beloved Ian Murdock.

 

04 Jul

KiloCore project packs 1,000 CPU cores into tiny power envelope, 32nm process

It’s been almost a decade since CPU developers began talking up many-core chips with core counts potentially into the hundreds or even thousands. Now, a recent paper at the 2016 Symposium on VLSI Technology has described a 1,000-core CPU built on IBM’s 32nm PD-SOI process. The “KiloCore” is an impressive beast, capable of executing up to 1.78 trillion instructions per second in just 621 million transistors. The chip was designed by a team at UC Davis.

First, a clarifying note: if you Google “KiloCore,” most of what shows up is related to a much older IBM alliance with a company named Rapport. We reached out to project lead Dr. Bevan Baas, who confirmed to us that “This project is unrelated to any other projects outside UC Davis other than that the chip was manufactured by IBM. We developed the entire architecture, chip, and software tools ourselves.”

The KiloCore is similar to other many-core architectures we’ve seen from other companies, in that it relies on an on-chip network to carry information across the CPU. What sets the KiloCore apart from these other solutions is that it doesn’t include L1/L2 caches or rely on expensive cache coherency circuitry.

Table 1, from the VLSI paper.

The historic problem with attempting to build large arrays of hundreds or thousands of CPU cores on a single die is that even very small CPU caches drive up power consumption and die size very quickly. GPUs utilize both L1 and L2 caches, but GPUs are also designed for a power budget orders of magnitude higher than CPUs like KiloCore, with much larger die sizes. According to the VLSI whitepaper, KiloCore cores store data inside very small amounts of local memory, within other nearby processors, in independent on-chip memory banks, or in off-chip memory. Information is transferred within the processor via “a high throughput circuit-switched network and a complementary very-small-area packet-switched network.”

Taken as a whole, the KiloCore is designed to maximize efficiency by only spending power to transfer data when that transfer is necessary for a given task. The routers, independent memory blocks, and processors can all spin up or down as needed for any task, while the cores themselves are in-order with a seven-stage pipeline. Cores that have been clock-gated to off leak no power at all, while idle chips leak just 1.1% of their estimated energy consumption. Total RAM in the independent memory blocks is 64KB * 12 blocks, or 768KB total and the entire chip fits into a package measuring 7.94 mm by 7.82 mm.

Why build such tiny cores?

The numerous research projects into many-core architectures over the past 5-10 years are at least partly a reaction to the death of single-core scaling and voltage reductions at new process nodes. Before 2005, there was little reason to invest in building the smallest, most power-efficient CPU cores available. If it took five years to move your project from the drawing board to commercial production, you’d be facing down Intel and AMD CPUs that were cheaper, faster, and more power efficient than the cores you started off trying to beat. Issues like this were part of why cores from companies like Transmeta failed to gain traction, despite arguably pioneering power-efficient computing.

The failure of conventional silicon scaling has brought alternate approaches to computing into sharper focus. Each individual CPU inside a KiloCore offers laughable performance compared to a single Intel or even AMD CPU core, but collectively they may be capable of vastly higher power efficiency in certain specific tasks.

“The cores do not utilize explicit hardware caches and they operate more like autonomous computers that pass information by messages rather than a shared-memory approach with caches,” Dr. Baas told Vice. “From the chip level point of view, the shared memories are like storage nodes on the network that can be used to store data or instructions and in fact can be used in conjunction with a core so it can execute a much larger program than what fits inside a single core.”
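To make that distinction concrete, here is a minimal sketch of the message-passing style Dr. Baas describes, written in ordinary desktop C++ with threads. It has nothing to do with KiloCore’s actual toolchain or ISA; it simply shows workers keeping their data private and exchanging it through explicit messages rather than reading and writing a cache-coherent shared memory.

```cpp
// Minimal message-passing sketch in desktop C++ (std::thread), illustrating
// the programming style described above rather than KiloCore's real tools:
// each "core" keeps its data private and communicates only through queues.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny blocking channel standing in for the on-chip network.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

int main() {
    Channel<std::vector<int>> to_worker;  // link toward the worker "core"
    Channel<long> to_collector;           // link back toward the collector

    // Worker "core": receives a block of data, processes it, forwards a result.
    std::thread worker([&] {
        std::vector<int> block = to_worker.receive();
        long sum = 0;
        for (int v : block) sum += v;     // stand-in for real per-core work
        to_collector.send(sum);
    });

    to_worker.send(std::vector<int>{1, 2, 3, 4, 5});
    std::printf("result = %ld\n", to_collector.receive());  // no shared state touched
    worker.join();
    return 0;
}
```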

The point of architectures like this is to find extremely efficient methods of executing certain workloads, then refine those architectures for greater efficiency or execution speed without compromising the extremely low power consumption of the initial platform. In this case, the KiloCore’s per-instruction energy can be as low as 5.8 pJ, including instruction execution, data reads/writes, and network accesses.

 

03 Jul

Wi-Fi Enabled Universal Remote


Zmote is a Wi-Fi-connected IR blaster with learning and monitoring capabilities. You can use it as a universal remote by placing it in your living room and controlling all your gadgets (TV, STB, AVR, media PC, etc.) from your phone or over the web using a regular browser. We support over 3,000 remotes out of the box, so you will most likely find your devices already available to use with zmote.
WATCH ZMOTE IN ACTION HERE: https://goo.gl/36ym2E
WATCH ZMOTE AS A UNIVERSAL REMOTE: https://goo.gl/DOadPJ

03 Jul

BMW i3 batteries get recycled for home power backup

BMW will provide whole-house battery backup power using “second life” batteries pulled from older BMW i3 electric vehicles, either the original 22-kWh packs or the newer 33-kWh packs. The system stores more energy than Tesla’s slick-looking Powerwall, enough to keep the typical American home powered for a day, and is charged by solar panels.

Separately, BMW announced ChargeForward, a prototype time-shifting charging program that lets the utility company defer charging of BMW i3s by up to an hour when there’s heavy demand on the power grid. The announcements came this week at EVS29, the Electric Vehicle Symposium & Exhibition 29, held in Montreal.


Expandable backup power from BMW’s boxy system

The BMW i3 battery system comprises two or more modules attached to the garage wall: a controller (the all-blue box) and a BMW i3 battery pack. While BMW touts the value of finding a second life for i3 battery packs, the system could also use a brand-new i3 pack, and the owner could add additional battery pack modules. BMW says the average American home draws 15-30 kWh a day, so each battery module would be good for roughly one day of electricity for the house, supplemented by solar panels recharging the battery.
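As a quick sanity check on that “about one day” figure, here is the arithmetic using only the numbers quoted above; it ignores inverter losses, usable-capacity limits, and any solar contribution, so treat it as a rough bound rather than a spec.

```cpp
// Back-of-the-envelope check of the "about one day per module" claim, using
// only the figures quoted in the article: 22 kWh and 33 kWh i3 packs, and a
// typical US household draw of 15-30 kWh per day.
#include <cstdio>

int main() {
    const double packs[] = {22.0, 33.0};               // kWh per i3 pack generation
    const double daily_low = 15.0, daily_high = 30.0;  // kWh/day, typical US home

    for (double kwh : packs) {
        std::printf("%.0f kWh pack: %.1f to %.1f days of backup\n",
                    kwh, kwh / daily_high, kwh / daily_low);
    }
    return 0;
}
```

That works out to roughly 0.7-1.5 days for a 22-kWh pack and 1.1-2.2 days for a 33-kWh pack, which squares with BMW’s one-day-per-module claim.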

BMW says that after years of charge-discharge cycles that may affect driving range or acceleration, the batteries are still well suited to the less-demanding task of providing power to a home. BMW did not discuss pricing or system lifespan in its announcement; the system is currently in the concept phase. BMW said it’s working with Beck Automation of Würzburg, Germany.

The battery backup system is part of BMW i 360° Electric, which BMW says “encompasses products for home and public charging, assistance services, and flexible mobility …. [and] will also provide customers access to an energy-conscious, sustainable lifestyle.” For instance, BMW South Africa is testing, with owners of BMW i3 and i8 vehicles, a solar carport that provides direct charging of the car.

According to Cliff Fietzek, manager of connected eMobility at BMW of North America, “The remarkable advantage for BMW customers in using BMW i3 batteries as a plug and play storage application is the ability to tap into an alternative resource for residential and commercial backup power, thus using renewable energy much more efficiently, and enabling additional revenues from the energy market.”

ChargeForward for time-shifting charging times

BMW is working with California’s Pacific Gas and Electric Company (PG&E) and 100 BMW i3 owners in the San Francisco Bay Area on “demonstrating how intelligent management of EV charging can contribute to optimizing electric power grid efficiency, and thus is expected to reduce total cost of electric vehicle ownership.” The owners agree to let PG&E delay charging by up to an hour, or interrupt it, when there’s excessive drain on the power grid that PG&E couldn’t otherwise handle, or that would force it to bring costlier or higher-polluting power sources online.

The reduced TCO may refer to PG&E providing charging at lower rates.


What Tesla and Nissan are doing

Tesla’s Powerwall is the best-known household backup or supplementary power system. It has a slick name, it comes in a sleek box (BMW’s power box apparently didn’t get any help from BMW’s SoCal DesignWorks consultancy), and it was announced in early 2015. Tesla announced both 7-kWh (actually 6.4-kWh) and 10-kWh backup systems, then killed the 10-kWh Powerwall earlier this year and made it clearer that the 6.4-kWh unit works best as a supplementary or booster power supply. The economics of the bigger unit apparently didn’t work out: generators can provide emergency power for days or weeks.

In Japan, Nissan in 2011 introduced an emergency power system that used the battery already in a Nissan Leaf EV to provide a day or two of power to the typical Japanese home, which uses half the energy of a US home. The system included a larger charging unit that had the electronics and switching capability to deliver the car’s power to the house.

 

02 Jul

Smart Meters Reveal Internet of Things Challenges, Opportunities

 

There are a number of similarities between Smart Meters and the Internet of Things, and it is interesting to compare them: the first Smart Meter deployments took place over 10 years ago, while deployment of the Internet of Things has only just started.

Both involve deploying millions of devices, and both require comparable levels of device management and associated services.

This should allow us to draw lessons from the implementation of Smart Meters that may help illustrate the challenges the Internet of Things will face in the future.

There are common areas between Smart Meters and the Internet of Things:

  • Communication infrastructure: all of these objects (meters or things) must have a way to deliver information and receive commands, which in turn defines the communication layer that gets built. Today, energy companies have either built their own communication infrastructure or rely on telecom partners; the latter option is still limited, mainly because of cost and the level of service guarantees, since communication reliability is a critical need for utilities. In both cases the interconnection model is strictly hierarchical and only as strong as its weakest component. The Internet of Things can only deliver its full power if it benefits from an efficient, reliable, flexible, and cooperative communication infrastructure.
  • Data tsunami: Smart Meters generate data volumes without precedent. Managing the transfer and storage of that data has created new jobs in utilities (collection management) and forced radical changes in both business processes and organization; utilities have realized this is the new “business as usual.” The introduction of the Internet of Things into an existing organization likewise needs to be thought through at an early stage, so that its impact on operations and IT is accounted for. This gives an advantage to newcomers starting from a greenfield.
  • Big data analytics: data is a wealth of information if you know how to use it. Analytics on Smart Meter data at the distribution-network and customer level helps optimize processes (power-failure recovery), reshape services (fraud management, demand response), and identify new services (load curtailment for residential customers). This requires building models and the analytics that go with them. The Internet of Things, with its proliferation of sensors, will require similarly innovative models to deliver its full potential value; Smart Cities, for example, would be among the main beneficiaries.
  • Customer interaction: Smart Metering is revolutionizing how customers interact with utilities. It has created mistrust (see the Safety point below) but also offers an opportunity to change the relationship by accelerating digitalization, moving the utility from a resource-provider to a service-provider position. The Internet of Things will likewise have to provide high-quality services to build a trusted relationship with its customers.

A domain that is important to both Smart Meters and the Internet of Things is protection. It spans several axes:

  • Safety: because of the radio-frequency communication layer, some North American customers have become reluctant to have Smart Meters installed, considering them a risk to their health. Rightly or wrongly (depending on the technology used), this concern will have to be taken into account as the Internet of Things becomes widespread.
  • Security: whether it is encrypting data exchanges or preventing intrusions that could reach remote controls (such as the ability to disconnect a Smart Meter), security is a major issue for massive Smart Meter deployments. The Internet of Things will be confronted with the same concern, for example when health or Smart City services depend on the devices.
  • Privacy: access to and use of the collected data are the subject of numerous debates around Smart Meters: from the data you can deduce when someone is eating, whether they are at home, and so on. The data is available; you simply need to interpret it. The Internet of Things will have to answer the same questions to avoid rejection in some cases.

There are still some differences between Smart Meters and the Internet of Things, among which are:

  • Regulation: the Smart Meter regulatory environment is very strong, since energy is managed at the country level and treated as a matter of national sovereignty. For the Internet of Things, we can expect regulation to be less restrictive given its global coverage, although there will be exceptions (e.g. health).
  • Standardization: there are no standards at present for worldwide Smart Meter interoperability, mainly because of national regulation but also because the supplier market is highly fragmented. The Internet of Things, thanks to its global approach and worldwide players, should see the market generate standards that facilitate its spread.
  • Mobility: by nature, a Smart Meter is fixed. Part of the Internet of Things will involve fixed objects, but the majority of applications will involve mobility.

In summary, Smart Meters appear to be the first major deployment that prefigures the full-scale rollout and operation of the Internet of Things.

It would be wise to draw lessons from this first experience, even if it is not fully transposable, to prepare more effectively for a near future that will affect all areas of our day-to-day lives: health, smart cities, mobility, work, consumer usage, security, and more.

02 Jul

IBM to deliver 200-petaflop supercomputer by early 2018; Cray moves to Intel Xeon Phi

More supercomputer news this week: The US is responding to China’s new Sunway TaihuLight system that was announced Monday, and fast. First, the Department of Energy’s (DOE) Oak Ridge National Laboratory is expected to take delivery of a new IBM system, named Summit, in early 2018 that will now be capable of 200 peak petaflops, Computerworld reports. That would make it almost twice as fast as TaihuLight if the claim proves true. (We had originally reported in 2014 that both Summit and Sierra would achieve roughly 150 petaflops.)

TaihuLight (pictured below) now sits at number one on the twice-yearly TOP500 list of the fastest supercomputers in the world, with a Linpack benchmark score of 93 petaflops and a claimed peak of 124.5 petaflops. The latest TOP500 announcement Monday caused a bit of a stir. Not only is TaihuLight roughly three times faster than China’s Tianhe-2, the prior champion, but it also uses no US-sourced parts at all for the first time, as it’s powered by Sunway 260-core SW26010 processors that are roughly on par with Intel Xeon Phi, as well as custom proprietary interconnect.

The Sunway TaihuLight supercomputer.

In turn, Summit will employ IBM Power9 CPUs and Nvidia Volta GPUs. Summit will deliver over five times the computational performance of Titan’s 18,688 nodes using only about 3,400 nodes. Each node will have “over half a terabyte” of so-called coherent memory (HBM + DDR4), plus 800GB of non-volatile RAM that serves as a burst buffer or extended memory.
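For a rough sense of scale, here is the per-node arithmetic implied by the figures in this article; these are published targets rather than measured results.

```cpp
// Per-node arithmetic from the numbers quoted above: 200 peak petaflops,
// roughly 3,400 Summit nodes, 18,688 Titan nodes, and "over five times"
// Titan's overall performance. Targets only, not benchmark results.
#include <cstdio>

int main() {
    const double summit_pflops = 200.0;    // claimed peak petaflops
    const double summit_nodes  = 3400.0;
    const double titan_nodes   = 18688.0;
    const double total_speedup = 5.0;      // "over five times" Titan

    double tflops_per_node = 1000.0 * summit_pflops / summit_nodes;
    double per_node_gain   = total_speedup * (titan_nodes / summit_nodes);

    std::printf("Summit: roughly %.0f teraflops per node\n", tflops_per_node);
    std::printf("Per-node gain over Titan: roughly %.0fx\n", per_node_gain);
    return 0;
}
```

In other words, each Summit node is expected to do the work of more than 25 Titan nodes, which is why the node count can drop so dramatically.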


Titan (pictured below), meanwhile, is currently #3 on the TOP500 list, and resides at the DOE’s Oak Ridge National Laboratory, where the new IBM Summit system will be located. In a statement via Computerworld, the DOE pointed out that since 1993, US supercomputing capabilities “have grown exponentially by a factor of 300,000,” and that “high-performance computing remains an integral priority for the DOE.” (For more on Oak Ridge’s earth, energy, climate, and geographic science research using these systems, visit its dedicated supercomputing page.)

The Titan supercomputer at Oak Ridge National Laboratory.

The Linpack benchmark has emerged as a singular yardstick for measuring the performance of complex supercomputers. It doesn’t record overall performance in all situations; rather, it measures the performance of a system when solving a “dense system of linear equations” that gives a good approximation of real-world (as opposed to peak) performance. Read more about the benchmark at the above link.
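For the curious, here is an illustrative (and deliberately naive) example of the kind of computation Linpack times: solving a dense system Ax = b by Gaussian elimination with partial pivoting. The real benchmark uses heavily optimized, blocked routines on enormous matrices and derives its flops score from the roughly 2n³/3 floating-point operations such a solve requires; the tiny matrix below is just arbitrary test data.

```cpp
// Naive dense solver for Ax = b with partial pivoting -- an illustration of
// what Linpack measures, not the benchmark code itself.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

std::vector<double> solve(std::vector<std::vector<double>> A,
                          std::vector<double> b) {
    const int n = static_cast<int>(b.size());
    for (int col = 0; col < n; ++col) {
        // Pick the row with the largest pivot to keep the elimination stable.
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[pivot][col])) pivot = r;
        std::swap(A[col], A[pivot]);
        std::swap(b[col], b[pivot]);
        // Eliminate the entries below the pivot.
        for (int r = col + 1; r < n; ++r) {
            double f = A[r][col] / A[col][col];
            for (int c = col; c < n; ++c) A[r][c] -= f * A[col][c];
            b[r] -= f * b[col];
        }
    }
    // Back-substitution.
    std::vector<double> x(n);
    for (int r = n - 1; r >= 0; --r) {
        double s = b[r];
        for (int c = r + 1; c < n; ++c) s -= A[r][c] * x[c];
        x[r] = s / A[r][r];
    }
    return x;
}

int main() {
    std::vector<std::vector<double>> A = {{4, 1, 2}, {1, 5, 1}, {2, 1, 3}};
    std::vector<double> b = {4, 7, 3};
    auto x = solve(A, b);
    std::printf("x = %.3f %.3f %.3f\n", x[0], x[1], x[2]);
    return 0;
}
```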


Separately, Cray announced this week at the 2016 International Supercomputing Conference in Frankfurt, Germany that its Cray XC systems are now available with the latest Intel Xeon Phi (Knights Landing) processors. The company said the new XC systems, which feature an adaptive design that supports multiple processor and storage technologies in the same architecture, deliver a 100% performance boost over prior generations. Cray also unveiled the Sonexion 3000 Lustre storage system, which can deliver speeds of almost 100GB/sec in a single rack.

