Technology Archives - COBRA softwares
23 May

The world’s Smallest Satellite by an Indian School Student



NASA Space Settlement Contest

The NASA Space Settlement Contest is an annual design competition for students in grades 6-12, conducted by NASA. Participants must be between 11 and 18 years old and may enter from anywhere in the world.

The goal of this contest is to have students learn about science and teamwork while working on projects to develop space settlement designs.

 

Inventor

A student named Rifath Sharook, 18 years old, has built the world’s smallest satellite. He created it as part of a NASA-sponsored contest. The satellite weighs only 2.26 ounces (64 grams). He named it KalamSat, after the former Indian President Dr APJ Abdul Kalam. It is compact and lightweight.


Smallest, lightest satellite built by a Tamil Nadu higher secondary student
Talking about the experience he said,

“Actually, we are feeling very happy. Actually, this contest was judged by scientists and engineers in NASA. One of the scientists from the Colorado Space Grant Consortium appreciated us using an idiom. Actually, he said ‘we nearly fell out of the chair’. Actually, that means he was so excited and he was so amazed.”

-Rifath Sharook

Speaking to The Times of India, he said,

“The main role of the satellite will be to demonstrate the performance of 3D-printed carbon fibre. We did a lot of research on different cube satellites all over the world and found ours was the lightest.”

-Rifath Sharook


Smallest, lightest satellite by Rifath Sharook, honoured with certification

NASA will launch the tiny satellite on June 21 on an SR-4 rocket from the Wallops Flight Facility in Virginia. Talking about the mission, he said,

“It will be a sub-orbital flight and post-launch, the mission span will be 240 minutes, and the tiny satellite will operate for 12 minutes in a micro-gravity environment of space.”

-Rifath Sharook

He has shown that India cannot be overlooked as a source of inventions and geniuses. Cobra Softwares is proud to publish this article about him.
Happy Reading,

by
Praveen Ez

21 Jul

SoftBank is reportedly buying chip designer ARM for $31 billion

SoftBank has agreed to acquire British chip designer ARM Holdings for £23.4 billion (about $31 billion), the Financial Times reports, making the deal the biggest ever purchase of a European technology company. The publication says the official announcement of the acquisition will come on Monday morning, with SoftBank paying £17 for each share in the British company, weeks after the UK voted to leave the EU and the pound plummeted in value.

The British vote to leave the European Union likely made ARM a juicier target to foreign companies looking for investments, but as the Financial Times notes, the chip designer’s global significance protected it from the worst of the precipitous drop in the pound’s value following the decision. ARM is comparatively small in terms of revenue compared to chipmaking giants like Intel, but the company has punched above its weight on the global market, licensing its tech to huge firms like Apple and Samsung and collecting royalties on each chip made. 15 billion ARM-designed chips shipped last year, an increase of 3 billion on the year before, with around half of those going into mobile devices.

15 BILLION ARM-DESIGNED CHIPS SHIPPED LAST YEAR

The acquisition is one of many over recent years for SoftBank — including a $15 billion investment in Vodafone’s Japanese division that helped SoftBank become the third-biggest carrier in the country — but the $31 billion price tag is a significant marker, higher even than the $20 billion it paid for Sprint three years ago. ARM could be another good investment for Masayoshi Son: the British firm has proven that its chips work with smartphones and has existing relationships with major companies, while big competitors like Intel still struggle to produce mobile chips of the same quality.

SoftBank has endured some tough months recently. Sliding profits for Sprint forced the Japanese telecoms giant to break the American company off earlier this year, protecting its successful Japanese business. The company also parted ways with Nikesh Arora, a former Google executive who took the position of SoftBank’s COO and president and had appeared to be next in line to succeed CEO Masayoshi Son.

21 Jul

The Internet of Medical Things, Part 1: A New Concept in Healthcare

Manufacturers are eager to lay the bricks and help pave the road to better, more personalized healthcare through integration of connected devices in the new Internet of Medical Things.

Compared to other industries, healthcare has been “inherently conservative and slow” to embrace innovations such as the cloud and the Internet of Things — but that’s changing. Innovative tech products and services are making it harder for healthcare providers to ignore the potential benefits of connected medical devices and the IoMT, according to Ian Shakil, CEO of Augmedix.

“We’re seeing this firsthand, as major healthcare systems — including Sutter Health, Dignity Health, CHI and TriHealth — scale Augmedix’s service across their organizations with a mission to rehumanize the doctor-patient relationship,” he told TechNewsWorld.

Little Gadgets and Big Data

People aged 65 and older made up about 14.7 percent of the United States population in 2014, and that figure could reach 20 percent, according to Administration on Aging projections.

Although the over-65 population will continue to drive up healthcare costs, new support for aging in place and efforts to shorten hospital stays could alleviate that strain. However, that’s dependent on device manufacturers and software developers getting the customer experience right for both patients and providers, suggested Virtual Health CEO Adam Sabloff.

The valuable data rendered from connected medical devices, “such as wearables that monitor blood pressure and other vitals, is relatively useless if never presented to a patient’s care team in an easy to read, actionable manner,” he told TechNewsWorld. “This is where innovative end-to-end solutions come into play.”

Virtual Health maintains a cloud-based platform that aggregates and integrates data from a range of sources and formats to give care teams “a 360-degree view of the patient.” That allows physicians to “intervene and provide proactive care before a condition manifests or worsens,” said Sabloff.

“Patients will truly benefit from the IoMT only when end-to-end solutions are fully leveraged to help clinicians access and act on valuable data provided by wearables and other remote monitoring tools,” he added.

Encouraging Adoption

Mobility and data management are key to the IoMT’s success, according to Keith Cooper, CEO of Constant Therapy.

“When medical care or therapy is untethered from a face-to-face office visit, patients can receive more and consistent care on any day in any place,” he told TechNewsWorld.

Constant Therapy’s tools for brain rehab offer users five times more speech and cognitive therapy than conventional approaches, the company’s internal studies concluded.

As for data and analytics, the company’s digitization of cognitive therapy delivery has made information about a patient’s strengths and deficits “instantly available and clearly understandable.”

Deeper insights and data tracking can lift the veil on understanding, Cooper said. Complementing that are interoperability and a compelling customer experience.

Interoperability to a large degree entails building in system support for all major mobile devices, and building products that are fresh and engaging, said Michelle de Haaff, vice president of marketing and customer success at Glooko.

“Freshness includes features that enable patients to share and get new insights from their data, get remote support from a care team, and even incentives that can keep them coming back for more,” de Haaff told TechNewsWorld.

Culture Change

To drive adoption of IoMT systems and to achieve more end-to-end solutions, hospital administrators, vendors and manufacturers must cooperate to lead healthcare through a culture change, said Joel Cook, senior healthcare solutions manager at Stanley Healthcare.

Remote consultations, social sensors, gadgets that offer guidance on health and wellness, analytics-driven regimens and other IoMT innovations are driving a cultural transformation that could move modern medicine into its next era.

“The keys to success are a clearly defined use case,” Cook told TechNewsWorld, “and goals that are agreed upon by all the stakeholders.”

20 Jul

The Internet of Medical Things, Part 2: Devices and Apps

Concerns have been multiplying in the United States over the creaks of a medical infrastructure that’s bearing the weight of an aging population. Current and upcoming members of the over-65 club are helping to drive a new age in healthcare, in which devices connect patients with caregivers, things get even smarter, and machines get even better at learning what sensors are saying.

The swelling over-65 segment of the population, which accounts for about 63 percent of long-term healthcare services in the U.S., is expected to drive the demand for new healthcare technologies, according to a recent report from Grand View Research.

The “high, unmet medical needs” of seniors are one factor that has pushed healthcare organizations to adopt and expand remote patient monitoring and telemedicine.

Efforts to meet those needs will sustain the market for the Internet of Medical Things until at least 2022 — at which point the sector is expected to near $410 billion, according to the research firm.

Leading the sector’s growth are its medical devices, which are projected to drive a compound annual growth rate of 28.3 percent between now and then.
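The compounding behind that growth rate is easy to sanity-check. The short Python sketch below (the six-year 2016-2022 horizon is an assumption inferred from the article's dates, not a figure from the report) shows that a segment growing at 28.3 percent per year roughly quadruples over the period:

```python
def growth_factor(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# Medical devices segment: 28.3% CAGR over an assumed 2016-2022 horizon.
factor = growth_factor(0.283, 6)
print(f"{factor:.2f}x")  # prints "4.46x" -- roughly a 4.5x expansion
```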

All Shapes and Sizes

They’ve been popping up on wrists in the form of smartwatches like the Apple Watch and connected fitness bands like Fitbit.

Google plans to put them in people’s eyes, employing a contact-lens form factor to read glucose levels.

mc10 has developed a biometric stamp, about the size of a medium band-aid, that reports a user’s vitals to connected devices and doctors.

Meanwhile, researchers at the University of Buffalo are working on a pendant, worn on a choker-type necklace, that can analyze chewing and swallowing sounds to determine what wearers are eating and alert them when they’ve consumed too many carbs.

This is the face of a smarter, more connected manifestation of modern medicine. Still, the devices patients and consumers use are only one side of the coin, noted Ian Shakil, CEO of Augmedix.

“The Internet of Medical Things will continue to evolve and deliver value by getting the complexities of technology out of the way and connecting — or better yet, reconnecting — doctors with patients,” he told TechNewsWorld.

One of the original partners for Google’s Glass at Work enterprise initiative, Augmedix has focused on the other side of the coin. Google shuttered the consumer side of its Glass program, focusing instead on the enterprise and on companies that develop augmented reality experiences for other organizations.

Augmedix’s Glass-based service frees the hands of doctors to spend more of their time working with patients and less of it looking at documents.

“Augmedix’s service is optimally designed to deliver the highest-quality patient notes, resulting in improved patient care and payor reimbursement,” said Shakil.

The Eyes Have It

Before using the service, physicians would spend 70 percent of their time on documentation, the company estimated.

“This newly reclaimed time can be repurposed in-clinic for other administrative tasks or even to see more patients,” Shakil said.

While Augmedix has been working to free doctors from the bog of documentation, care facilities long since have adopted technologies such as real-time location systems.

RTLS is one of the original use cases for the IoMT and predates the term, according to Joel Cook, senior healthcare solutions manager at Stanley Healthcare.

Such systems traditionally have been leveraged for asset management, such as locating equipment and monitoring for temperature-sensitive substances, he told TechNewsWorld. Pulling from the opposite direction, hospitals now are focusing on improving the patient experience, along with tuning the efficiency of staff operations.

“Using wireless RTLS devices, staff and family members can trace a patient’s entire journey through the hospital — from the waiting room to surgery to post-op, and how long they spent in each area,” Cook said.

Data, Big and Small

Along with mobility, which is afforded by wearables and IoMT devices, data and analytics are key factors in modernizing medicine to meet demand, according to Keith Cooper, CEO of Constant Therapy.

At Constant Therapy, developers and researchers have been building out and maintaining a platform that evaluates patients with the goal of improving brain function. Patients input data by performing tasks on smart devices, and the software makes individual assessments and recommendations.

“By collecting anonymized data from every person’s interaction with our therapy programs, we can analyze what works and what doesn’t work for each type of individual,” Cooper told TechNewsWorld.

Constant Therapy uses more than 200 million data points to “deliver precise and personalized” therapy with science as its basis, he said, noting that “this is exactly what is envisioned by personalized medicine.”

Meanwhile, Glooko software engineers have built a platform that connects users and their diabetes devices to caregivers and the company’s analytics platform.

The service supports more than 50 cloud-connected medical devices, from continuous glucose monitors to insulin pumps, along with a host of mobile phones and tablets to help people better understand the impact their behavior has on their diabetes, explained Michelle de Haaff, vice president of marketing and customer success.

By “connecting these devices to mobile apps that can provide on-demand insights, reminders and notifications,” de Haaff said, the IoMT “can increase adherence rates to taking medications, improve nutrition and exercise commitments, and deliver the data needed for remote patient monitoring.”

19 Jul

Nest Cam Braves the Great Outdoors

Nest on Wednesday introduced an outdoor surveillance camera to complement its indoor Nest Cam.

The Nest Cam Outdoor will go on sale this fall for US$199. It is easy to mount on any outdoor surface, according to the company, and even has a magnetic base for attaching to gutters.

The camera, which resembles a unit of track lighting, can capture 1080p video. It has live-streaming capabilities, as well as two-way audio. It has a 130-degree field of view and includes night vision support.

Power to the WiFi camera is provided through a 25-foot cable.

“Nest Cam Outdoor is one of the first, if not the first, security camera to offer features such as Talk and Listen and person alerts,” said Nest Product Manager Mehul Nariyawala.

“It’s the first to offer 24/7 cloud recording — others only offer DVR-based 24/7 recording, which is hard to access when you’re away from home and easy for burglars to steal,” he pointed out.

“It also has the first-ever IP67 rated outdoor adapter, so you don’t have to drill a hole in the wall to install your camera,” Nariyawala told TechNewsWorld.

Camera That Cried Wolf

The outdoor camera uses the same Nest Aware software as its indoor counterpart. It sends alerts of motion and sounds within camera range to users’ smartphones. It also stores video captured by Nest cameras in the cloud — for a fee. Ten days of video history can be stored for $10 a month and 30 days for $30 a month.

With the launch of Nest Cam Outdoor, the Nest Aware software will be improved to allow the system to distinguish between people and other objects in the camera’s view.

“The people-recognition thing is a huge issue, because I get so many false negatives,” said Bob O’Donnell, chief analyst with Technalysis Research, who owns two of Nest’s indoor cams.

“It becomes like the boy who cried wolf,” he told TechNewsWorld. “I just started ignoring the warnings.”

Nest Cam Outdoor isn’t an exciting product, but it’s a necessary product because the company needs an outdoor camera to be competitive, O’Donnell explained. “There’s no glamour around this product. Anyone could have made it.”

Power Problems

Failure to do more to differentiate the Nest Cam Outdoor from others in the market is a stunning miscalculation, suggested Patrick Moorhead, principal analyst at Moor Insights and Strategy.

“It brings little, if anything, new to the table,” he told TechNewsWorld. “Yet it has many downsides.”

Among the downsides are no solar or battery power support; no support for Apple HomeKit; and no ability to zoom, pan and swivel, Moorhead noted.

The absence of battery power could be a real deal breaker for some consumers, suggested Kevin Krewell, a principal analyst at Tirias Research.

“Think about it. Where are the power outlets on the outside of the house? They’re near the ground where anyone can disconnect the power to the camera,” he told TechNewsWorld.

“They didn’t change the basic functionality of the indoor Nest camera — they just waterproofed it,” Krewell said.

There are some fundamental flaws with battery-powered security cameras, maintained Nest’s Nariyawala.

“They’re easy to install but easy to steal because they’re wireless,” he argued. “They also only start streaming when they detect motion, providing unreliable and limited streaming to save power,” he added.

“In fact, they remain off the majority of the time to conserve power,” Nariyawala pointed out. “A true security camera needs to be standing guard 24/7 — and watch, hear and record everything so you get the full picture.”

Ecosystem Key to Success

On its own, the camera is only a small part of the attraction for consumers, suggested Jonathan Gaw, a research manager at IDC.

“The device itself induces a hard shrug. It’s the Web services behind the devices — the intelligence — that count,” he told TechNewsWorld.

“Nest hopes the intelligence of its cloud services, such as Nest Aware, combined with an ecosystem of connected devices working together will close the deal for consumers,” said Gaw.

The ecosystem is indeed the thing, emphasized Charles King, principal analyst at Pund-IT.

“It gives owners access to value-added features,” he told TechNewsWorld, “including alerts to their smartphones, the ability to speak and listen through the cam via a phone, and automatic storage of video in the Google cloud.”

However, Nest’s cloud storage pricing structure — annual rates of $100 for 10-day video histories and $300 for 30-day ones — may be a turnoff for consumers, he remarked.
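Comparing the monthly and annual rates quoted in this article, a quick calculation (a sketch based only on the prices cited here, not on Nest's full price list) shows how modest the discount for paying annually actually is:

```python
def annual_savings(monthly_rate: float, annual_rate: float) -> float:
    """Savings from paying one annual rate instead of twelve monthly payments."""
    return monthly_rate * 12 - annual_rate

# 10-day history: $10/month or $100/year; 30-day history: $30/month or $300/year.
print(annual_savings(10, 100))  # 20.0 -> $20 saved per year
print(annual_savings(30, 300))  # 60.0 -> $60 saved per year
```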

“Other cams, like the Netgear Arlo, offer far cheaper cloud storage options,” King pointed out.

While an outdoor version of Nest’s indoor camera makes sense, its consumer appeal is an unknown.

“It’s hard to say how many homeowners are actively looking for a product like this,” King said. “Other applications, such as adapting the Nest Cam Outdoor for business or industrial use, might be more lucrative for Nest.”

18 Jul

Banana Pi gets upgraded with 64-bit SoC to counter Raspberry Pi 3

To rival the latest Raspberry Pi model, Chinese board maker Sinovoip has launched a new version of its Banana Pi. Called the Banana Pi BPI-M64, the new single-board computer features a 64-bit Allwinner A64 system-on-chip (SoC) with up to 2GB of RAM.

The Banana Pi BPI-M64 has four cores of Cortex-A53 at a clock speed of 1.2GHz. The board also includes a dual-core Mali 400 MP2 GPU and comes equipped with a microSDXC card slot that expands the built-in 8GB eMMC flash storage.

Similar to the Raspberry Pi 3, the new Banana Pi model supports Wi-Fi 802.11 b/g/n and Bluetooth 4.0 connectivity. There is a GPIO header that is pin-compatible with the Raspberry Pi, and two HDMI ports that support up to 4K video playback. Furthermore, the board has three USB 2.0 ports and a microUSB port with USB OTG support.


For producing content on a large screen, the 64-bit Banana Pi has DSI display interface support. Additionally, there is a CSI camera interface to enable image and video capture directly through the credit card-sized computer.

The Banana Pi BPI-M64 can run a variety of operating systems, including Android, Debian Linux, Ubuntu and Raspbian. However, there is no confirmation whether the new board would run Microsoft’s Windows 10 IoT core that already supports the newest Raspberry Pi version.

Sinovoip has not revealed the price of the new Banana Pi. But it is likely to be somewhere around the $35 price of the Raspberry Pi 3.


10 Jun

On technology-assisted longevity, or: The singularity will not be televised


Over the past 10 days or so, we’ve highlighted multiple advances in medicine and technology that could change medical care as we know it in the not-too-distant future. For our final deep dive on the topic (for now), we’re going to address one of the most difficult questions of all, a difficulty that mankind has been struggling to overcome at least since Qin Shi Huang died from ingesting mercury in a bid to make himself immortal: Human longevity. And while immortality remains firmly outside our grasp, we’re using technology to improve human longevity today, and have been since the adoption of fire and simple machines. Using medical technology, we can attack the problems limiting human lifespan.

What kind of technology has (or could have) direct lifespan extension capabilities?

In this instance, we’re defining direct lifespan extension capabilities as those things that specifically counteract aging or causes of death. Ultimately, there are both internal and external factors that cause cells and organisms to age and die. External things like UV radiation (Baz Luhrmann — wear sunscreen!) cause DNA damage which can trigger a cell’s self-destruct mechanisms. Within and between cells, the molecular flotsam and jetsam of life build up like crumbs in the cupholders of a car, causing a condition called senescence. Senolytics, which are substances that unclog the works by selectively inducing the death of senescent cells, can address that problem. Senolytics have been shown to extend healthy adulthood in mice, but as usual, it’s not clear yet whether that will translate to humans.

Telomerase is another likely player in the fight against aging. Telomeres are regions of disposable DNA that cap off the ends of chromosomes, preventing the sticky ends of DNA from attaching somewhere they shouldn’t. During life, repeated handling can shorten telomeres, and when telomeres get too short, DNA replication gets buggy, which can also lead to cell death. Telomerase is an enzyme that prevents this degradation, sort of like the protective border around a Polaroid. Getting the benefit out of telomerase isn’t as simple as taking telomerase pills to live longer, but we don’t know exactly why yet.


Telomeres (artist depiction)

In fact, the whole idea of senolytics is that we can get finer control over the cell cycle, so that we can tell cells when to die and when not to. It’s much less a statement about any single therapy than it is a statement about the relative level of scientific advancement needed to make these biotechnologies work.

Then there are more indirect forms of technological life extension. We’re defining “indirect” life extension as technologies that prevent unnecessary deaths, extend valuable years, or reduce stress that leads to aging and infirmity. Falling into this category are things like smart houses — alert and first aid systems that can prevent deaths and make daily living easier. But there are also inventions like the exoskeletons we’ve covered recently and even materials-science innovations like, say, an implant or indwelling device made from a highly hydro- and oleophobic material that can repel biofilm adhesion.

Personalized health care, in particular BayMax-like personal health care assistants, also represent a coming sea change in how we age. To the extent to which AI is actually better than human doctors, it will take over the practice of medicine, no matter how unflattering it may feel at the time.

“Dietary supplements” and megadosing, though, are going the way of the superfood craze. Like every other dietary fad, nutraceuticals are of limited use outside the narrow set of conditions where they’ve been found to work. If you’ve read about the genuinely dismal state of the dietary supplement industry, you may be familiar with this problem. When Ray Kurzweil spoke at RIT in 2008, he had with him a gallon-size Ziploc that he said held the several cups of pills he claims to take per day. Kurzweil is among others hoping to hedge their bets and extend their lives with liberally applied nutraceutical-grade snake oil, proving that you can be smart, well-intentioned, well-informed, and still just plain wrong. The best argument in favor of nutraceuticals is “Hey, it can’t hurt.” There is no dietary magic bullet for life extension, and if you’re trying the “spray and pray” method like Kurzweil, you’ve already lost.

This brings us to the singularity.

The general idea of a technological singularity is a future where technology has advanced so far that it outstrips the ability of humans to understand it. There are, more or less, two ways the scenario can play out: in favor of humans, or in favor of the AI. Kurzweil and his singularitarians tend to believe that if humans can ride the wave of advancing technology, forming a harmonious relationship with it, we can enhance our own intelligence to the point of transcending our physiological bounds. In certain possible futures, this could mean directly augmenting the body and brain with cybernetics in order to improve or extend function or lifespan.

If — and this is a big if — we had sufficient computing prowess and sufficiently advanced understanding of the crossroads between body and mind, we could even budge up against mind uploading. Setting aside the difficulties of storage media and whether or not the Matrix is a good idea, mind uploading could neatly circumvent mortality by representing your consciousness as a lossless set of nodes and waveforms and their associated Boolean logic, which could be easily transmitted between systems, like a PDF.

Gordon Moore, of Moore’s Law, actually argued against the idea of a technological singularity. He wasn’t convinced that the miniaturization of transistors could make them suited to modeling the complexity of the human brain. Kurzweil’s vision of a technological singularity can be understood differently, by looking at something Asimov wrote called “The relativity of wrong.” The general gist of the piece is that, no matter how wrong we are about things now, we’ll never be that wrong again. Another way of saying it is that progress builds upon itself; we don’t unlearn innovations we’ve made, which means that the pace of history inevitably accelerates through time. Every new idea comes into being in the presence of all the old ones, which means that the pool of information never shrinks.

So, while we’re running around chasing longevity, how do we make sure we get more Guinan and less Lady Cassandra?


Moisturize me! (Credit: Doctor Who Wiki)

My personal feeling here is that sure, we’ll get a set of interventions and therapies that can cut down on the vast majority of things that cause aging: the first five sigmas, as it were. But as we do that, things will likely become progressively weirder. Gene products don’t always do just the one thing; the phenomenon is called pleiotropy, whereby one segment of DNA does more than one important but unrelated thing. Crystallins are a good example of this — when expressed at low levels in body tissues, they act as enzymes, but when expressed at high levels in the eye, instead of being enzymatically active, they pack closely and form lenses. Meddling with a gene product is going to be a dicey proposition at best.

The bottom line is that humanity’s use of technology has the potential to be at least as constructive as it can be destructive, and navigating the line between those states will require a degree of knowledge orders of magnitude beyond what we have now. In theory, it should be possible to build supercomputers with enough raw processing power to simulate the human brain within a decade. Even if this occurs, however, it’s only the first step. The challenge from that point will be how to simulate a human brain, and to discover whether or not said simulation is capable of anything like human learning or cognitive processes. The question of sapience and consciousness and our ethical obligations in the event that these should occur is an entirely different can of worms.

The singularity, when and if it occurs, is unlikely to be anything like we’ve imagined, and the gulf between where we are today and where we’d need to be is too enormous to speculate on either its timeline or its properties. There are still too many things we know we don’t know, and undoubtedly many that we don’t know we don’t know, to speculate on what form the singularity might assume. Mice trials and gene therapy are a long way from the quantum leaps Kurzweil believes the human race will take in the not-too-distant future. But they’re the only way to ensure that the future he and others want to create is worth living in, rather than the cyber dystopia that could occur if appropriate ethical guidelines and safeguards are not observed.

 

07 Jun

Developer confirms Xbox One VR is coming, but how powerful is the console likely to be?


An anonymous European developer has confirmed that a major European studio is working on a VR title for the Xbox One upgrade currently expected in 2017. Microsoft is expected to announce the new console at E3 this year, and the new title will likely debut there as well.

Microsoft intends to work with Oculus to support VR on the Xbox One rather than building its own in-house solution, according to Ars Technica. At this stage in the game that’s probably wise; VR headsets historically take several years to develop, and the new Xbox One (Xbox 1.5? Xbox Two? Xbone Xtreme?) will already be hitting markets behind PlayStation VR and the PlayStation 4K / 4.5 / Neo. Unlike Sony, which has mandated strict backwards compatibility requirements, Microsoft may make Xbox One VR available only on the 2017 model due to VR’s high performance demands. That would make sense, given that the Xbox One typically lags the PlayStation 4 in performance tests, with its games often running at a lower resolution.

How will Microsoft upgrade the Xbox One?

If rumors are true, Sony’s PlayStation Neo is a straightforward update with an improved GPU architecture, a faster CPU clock, and faster GDDR5 memory. Microsoft, in contrast, may have to make some tough choices about the future of its platform.

The Xbox One’s current design combines four DDR3-2133 channels with a 32MB SRAM cache. Main memory bandwidth is 68GB/s, while the SRAM cache provides 109GB/s. The SRAM cache is dedicated entirely to the GPU, as discussed in this Microsoft presentation, and it’s designed to alleviate pressure on main memory. Some developers have complained that the SRAM is too small to allow for 1080p; a faster, larger SRAM cache could prove extremely beneficial to the platform.


Xbox One SoC block diagram. The 32MB of ESRAM (four blocks of 8MB) is at the bottom.

According to the presentation linked above, the CPU on the Xbox One is incapable of saturating the quad-channel DDR3-2133 memory bus, while the GPU can easily saturate it. This causes significant problems for the CPU clusters, which is why Microsoft recommends that the GPU use SRAM as much as possible.

ESRAM

If Microsoft wants the next generation of Xbox One to run VR titles, it’s going to need a much larger pipe for the GPU. Theoretically the company could redesign its main memory bus and upgrade it to DDR4-3200, but that only works if it can also ensure the CPU doesn’t become memory starved. DDR4-3200 only offers ~102.3GB/s of memory bandwidth. While that’s 50% more than the current Xbox One, it’s not nearly enough to feed a modern midrange GPU, which is what we expect an updated Xbox One to target.
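The bandwidth figures above follow from the standard formula for a DDR bus: transfers per second times bytes per transfer times channel count. A quick sketch (the quad-channel, 64-bit-per-channel configuration is taken from the Xbox One discussion above; real-world throughput is always lower than these theoretical peaks):

```python
def peak_bandwidth_gbs(mt_per_s, bus_width_bytes=8, channels=4):
    """Theoretical peak bandwidth in GB/s for a multi-channel DDR bus."""
    return mt_per_s * bus_width_bytes * channels / 1000

xbox_one_ddr3 = peak_bandwidth_gbs(2133)  # current quad-channel DDR3-2133
ddr4_3200 = peak_bandwidth_gbs(3200)      # hypothetical DDR4-3200 upgrade

print(f"DDR3-2133 x4: {xbox_one_ddr3:.1f} GB/s")  # ~68.3 GB/s
print(f"DDR4-3200 x4: {ddr4_3200:.1f} GB/s")      # ~102.4 GB/s
```

The gap between 102GB/s and what a midrange GPU needs is exactly why GDDR5, HBM, or a bigger SRAM cache enters the conversation.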

Microsoft has two basic choices: It could redesign the console to use GDDR5 or HBM, thereby solving the bandwidth problem, or it could double down on its SRAM cache. The second choice would seem to preserve backward compatibility more easily, so it’ll be interesting to see where Microsoft goes from here.

 

04 Jun

Broadwell-E puts 10 cores on tap: Intel launches the Core i7-6950X at Computex 2016

BDW-EFeature

Ever since Intel debuted the “E” family of hardware with Sandy Bridge-E in 2011, these processors have formed the backbone of Intel’s top-tier enthusiast offerings. While the number of cores at the top of Intel’s regular consumer products has stayed steady since 2009’s Nehalem — four cores with Hyper-Threading — the company has steadily increased the number of CPU cores it offered in the “E” family, from six with Sandy and Ivy Bridge, up to eight with Haswell-E.

The problem with Haswell-E, however, was that its core counts came at the cost of lower overall clock speeds compared to the Core i7-4790K. This was especially true of the eight-core version of the chip, which clocked in at 3GHz base, 3.5GHz Turbo. In these cases, a Core i7-5960X wasn’t always faster than the much cheaper 4790K, despite having more cores.

Broadwell-E

The Broadwell-E die. Ten cores, 25MB of shared L3 cache.

Intel has made a variety of changes to Broadwell-E that it hopes will alleviate some of these discrepancies and create a more varied product family. Let’s kick off at the high level and work our way down.

Broadwell-E-1

The top-end Broadwell-E is a 10-core CPU with 25MB of L3 cache (2.5MB per core). Intel holds this ratio constant across its E-class processors — eight-core chips have 20MB of cache, while six-core chips have 15MB. The new CPUs are drop-in replacements on the X99 chipset, so if you already have one of these boards for Haswell-E, Broadwell-E should work with the appropriate BIOS flash. So let’s look at the chips themselves:

BDW-EPricing

The 6950X is Intel’s 10-core CPU, but the price tag on this chip is going to raise some serious eyebrows. At $1723, Intel is charging 1.58x as much for a 10-core CPU as for an eight-core chip, yet you’re only adding two more cores and four more threads. Paying nearly 60% more cash for 25% more cores isn’t all that appealing, even if it’s technically impressive that Intel managed to add two cores while keeping clock rates the same and without increasing CPU TDP.
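The value arithmetic is worth making explicit. Using the list prices above:

```python
# Price-per-core comparison of the 6950X vs. the 6900K, using the
# list prices from Intel's launch slide. Illustrative arithmetic only.
price_6950x, cores_6950x = 1723, 10
price_6900k, cores_6900k = 1089, 8

price_ratio = price_6950x / price_6900k  # ~1.58x the price
core_ratio = cores_6950x / cores_6900k   # 1.25x the cores

print(f"{(price_ratio - 1) * 100:.0f}% more money "
      f"for {(core_ratio - 1) * 100:.0f}% more cores")
# → "58% more money for 25% more cores"
```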

The Core i7-6900K carries a $1089 price tag and the same eight-core count as the outgoing 5960X, while increasing clock speeds slightly (up roughly 5%). The lower-end SKUs are still six-core parts and only see 100MHz increases. These aren’t exactly the enhancements that dreams are made of, though we want to note that the $412 price on the 6800K is actually quite good compared to the 6700K’s $327, provided you know you’ll use the additional cores.

In lieu of raw performance improvements, Intel is counting on some significant updates to its overclocking feature set to win over enthusiast hearts and minds.

Overclocking

Per-core overclocking is a nice touch, since it offers enthusiasts the option to test each core individually, determine which frequencies work best for which CPU cores, and then set them accordingly, with individual voltage curves. The one potentially substantial caveat to this is that how a core behaves alone and how it behaves when it’s loaded alongside other cores in the system could potentially be quite different. Nonetheless, per-core frequency targets could give overclockers some intriguing tuning capabilities. Let’s say, for example, that you determine four of your 10 CPU cores are capable of hitting 4GHz, six can reach 3.7GHz, eight are capable of 3.6GHz, and all 10 can run at 3.5GHz. Congratulations — assuming you’ve got the cooling to handle it, your 10-core chip is now running 14-16% over stock on a voltage and frequency curve you can work with and define yourself.
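The hypothetical tuning scenario above can be sketched in a few lines. The per-core frequencies here are the made-up example numbers from the paragraph, not measured results, and we compare against the 6950X’s 3.5GHz clock:

```python
# Per-core overclocking sketch: given the highest stable frequency
# found for each of the 10 cores, compute the uplift over stock.
stock_ghz = 3.5

# Hypothetical stable frequencies per core, matching the scenario
# above: four cores at 4.0, two more at 3.7, two at 3.6, two at 3.5.
per_core_max = [4.0, 4.0, 4.0, 4.0, 3.7, 3.7, 3.6, 3.6, 3.5, 3.5]

best_uplift = max(per_core_max) / stock_ghz - 1
avg_uplift = sum(f / stock_ghz - 1 for f in per_core_max) / len(per_core_max)

print(f"best-core uplift: {best_uplift * 100:.0f}%")  # 14%
print(f"average uplift: {avg_uplift * 100:.0f}%")     # 7%
```

The point of the feature is exactly this asymmetry: rather than locking all ten cores to the frequency of the worst one, each core runs on its own curve.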

It’s not clear exactly what VccU stands for, but the AVX ratio offset is related to a feature Intel has offered on its Xeon processors for several years. If you’ve paid attention to Intel’s long-term FLOPS scaling, you’re aware that Intel doubled the number of FLOPS it could perform per clock when it launched AVX, then doubled that figure again with AVX2.

FLOPS-Comp

FLOPS per clock rating on Intel CPUs.

These continual doublings don’t happen for free, however — the 256-bit AVX2 registers draw more power, which lowers the maximum frequency Intel can support. Starting with the Xeon E5 v3 family, Intel began setting lower maximum frequencies for its CPUs when they were executing sustained AVX2 workloads. Broadwell-E gives enthusiasts the option to set those offsets manually, defining different throttle points or possibly eliminating them altogether if you can handle the heat the CPU is kicking out (note that Intel’s thermal trip protections will continue to function normally). Intel is also claiming that its Turbo Boost Max 3.0 feature can deliver up to a 15% improvement compared to the Core i7-5960X’s method of regulating clock speed; we’ll have to confirm that in benchmark testing (our test motherboards didn’t even arrive until Friday, which is why we don’t have a review ready to roll for you fine folks).
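To see why the AVX offset matters, it helps to put numbers on the doublings. The per-clock figures below are the commonly cited single-precision peaks for each instruction set (our assumption, not taken from Intel’s slide), and the 3.0GHz AVX clock is purely hypothetical:

```python
# Peak single-precision FLOPS per clock, per core, for each SIMD
# generation discussed above (commonly cited figures; an assumption).
flops_per_clock = {
    "SSE (128-bit)": 8,        # 4-wide add + 4-wide multiply
    "AVX (256-bit)": 16,       # 8-wide add + 8-wide multiply
    "AVX2+FMA (256-bit)": 32,  # two 8-wide fused multiply-adds
}

def peak_gflops(cores, clock_ghz, isa):
    """Theoretical peak single-precision GFLOPS for a whole chip."""
    return cores * clock_ghz * flops_per_clock[isa]

# A 10-core chip throttled to a hypothetical 3.0GHz AVX2 offset clock:
print(peak_gflops(10, 3.0, "AVX2+FMA (256-bit)"))  # 960.0 GFLOPS
```

Even with a few hundred MHz shaved off by the offset, the FMA-enabled peak dwarfs what the same silicon could do at full clock with plain SSE.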

Platform improvements

There are two more aspects to the launch that we want to cover. First, Broadwell-E does bump up formal support for faster DDR4, up to DDR4-2400 instead of DDR4-2133. In practice, DDR4-3200 is already available, and we suspect many enthusiasts will opt for this instead, but if you care about sticking to Intel’s spec, well, things are a bit faster.

Secondly, Intel is now working with motherboard vendors to build Thunderbolt 3 support into specific motherboards that are rated for the feature, as shown below:

TBolt3

Thunderbolt 3 has been shipping on more systems than its predecessor thanks to the decision to swap to the USB Type-C connector. If you’ve got a mixed workstation environment with both Apple and PC hardware, adding Thunderbolt support to the PC side of the equation is a useful capability — and Thunderbolt 3 is significantly faster than Thunderbolt 2, with more flexibility and raw throughput.

Early thoughts

If Intel can deliver significant clock speed improvements through Turbo Boost Max 3.0, it may boost its overall CPU performance by a larger margin than the raw clock speed figures listed here. We frankly hope this is the case, as the general argument for buying a Broadwell-E over Haswell-E isn’t very strong.

There are two points to be made here: First, while there’s absolutely no evidence that Intel is sitting on extra performance it doesn’t want to unlock, it is fair to note that Intel has faced absolutely no competition in the high-end space since it launched the Core i7 family eight years ago. Intel’s modern product stack is priced by core count more than clock speed, and it hasn’t stretched itself to push core counts higher in the consumer market. From 2003 – 2006, Intel moved from one core to four. Six-core chips didn’t debut for another four years, and eight-core CPUs took four years after that. If Intel had been under genuine competitive pressure, it would’ve rolled those improvements more quickly than it did, and for a much lower price.

But — and this is critical — pushing higher CPU core counts into the market doesn’t mean that software will magically materialize to take advantage of those cores. According to Steam’s hardware survey, 47.12% of users are still on dual-core CPUs, while 45.86% have quad-core chips. The overwhelming majority of consumer software is still quad-threaded or less.

Now, it’s possible that the advent of DX12 will usher in an era where higher-end multi-core CPUs will prove themselves, AMD’s Zen will offer stronger competition for Intel, and high-end desktop users will rejoice to discover their CPU investments in gaming are finally paying off. That’s a pretty big jump to make from where we are to where we’d need to be, however, and it normally takes the industry 3-4 years to make a move that significant, especially when it involves API updates and fundamental engine overhauls.

If you’re a workstation user who wants a 10-core CPU at a higher clock for less money than an equivalent Xeon might cost, then the Core i7-6950X is exactly what you’re looking for. Most gamers are still going to be better served by the Core i7-6700K, though those of you who blend workstation and gaming workloads might want to take a look at the Core i7-6800K — while you trade off some clock speed, you still get 50% more cores for 26% more cost. It’s a net gain if you use workloads that can benefit. And if you’re using older hardware, like a Westmere or Sandy Bridge-E system, then the option to step up to a 10-core rig may also be extremely attractive. Ordinary gamers and users, we suspect, will do best with a 6700K or possibly something from the Kaby Lake refresh Intel is expected to launch this year.

 

03 Jun

Microsoft, Facebook team up to construct world’s fastest transatlantic cable

Microsoft headquarters

Microsoft and Facebook have jointly announced their plan to build a new undersea cable connecting the United States to Southern Europe. The cable, dubbed Marea, will link Virginia Beach to Bilbao, a city in the Basque Country, an autonomous community in northern Spain. The two companies expect to break ground on the cable in August 2016, with completion estimated by October 2017.

The new 4,101 mile (6,600 km) cable is expected to be the fastest undersea cable ever constructed, with eight fiber pairs and an estimated data capacity of 160Tbps (20TB/s). The cable’s route is deliberately different from conventional transatlantic cables, which typically terminate in New York or New Jersey. Microsoft claims that this routing decision will help ensure consistent and reliable connections, presumably because it can serve as a backup if something goes wrong with the current transatlantic cables.
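The 160Tbps and 20TB/s figures above are the same number in different units (8 bits per byte, decimal prefixes). A quick sanity check, with a purely illustrative transfer-time example:

```python
# Convert the cable's rated capacity from terabits/s to terabytes/s.
capacity_tbps = 160
capacity_tb_per_s = capacity_tbps / 8
print(capacity_tb_per_s)  # 20.0 TB/s

# At full capacity, moving a 50GB Blu-ray-sized file would take:
file_gb = 50
seconds = file_gb / (capacity_tb_per_s * 1000)
print(f"{seconds * 1000:.1f} ms")  # 2.5 ms
```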

Cloud-driven cable

The reason Microsoft and Facebook are teaming up in the first place is because they want to ensure robust infrastructure is in place to take advantage of their cloud computing initiatives. Microsoft’s blog post states:

We’re seeing an ever-increasing customer demand for high speed, reliable connections for Microsoft cloud services, including Bing, Office 365, Skype, Xbox Live, and Microsoft Azure. As the world continues to move towards a future based on cloud computing, Microsoft is committed to building out the unprecedented level of global infrastructure required to support ever faster and even more resilient connections to our cloud services. This robust, global infrastructure will enable customers to more quickly and reliably store, manage, transmit and access their data in the Microsoft Cloud.

Later in the post the company claims that Marea is designed to interoperate with a variety of network equipment, thanks to an open design process that both companies believe will lower costs and make it easier to integrate upgrades to the cable as faster transmission standards and improvements in optical data transmission become available. Whether or not this is true is a matter of some debate. While the Open Compute Project that Facebook started several years ago has reportedly saved certain companies a great deal of money, it hasn’t done a great job of meeting the needs of enterprise IT departments. In this case, however, Microsoft and Facebook jointly own the cable and its hardware, which should simplify tech roll-outs and adoption.

MAREA-Cable

Presumably all data transmitted over the cable will be encrypted to prevent the NSA from wiretapping and intercepting information, but the blog post doesn’t specifically mention this. Major companies like Facebook, Microsoft, and Google have all been taking steps to reduce the chance of NSA spying — foreign companies and individuals are much less likely to work with a vendor if they believe the company collaborates with the US government to monitor foreign citizens or corporations.

 


© 2015-2020 COBRA Softwares Pvt Ltd. All rights reserved.