
Faster technique to remotely operate robots

The traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.

But for someone who isn’t an expert, the ring-and-arrow system is cumbersome and error-prone. It’s not ideal, for example, for older people trying to control assistive robots at home.

A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn’t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.

“Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we’ve shortened the process to just two clicks,” said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort.

Her team tested college students on both systems, and found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than using the traditional method.

“Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them,” said David Kent, the Georgia Tech Ph.D. robotics student who led the project. “Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That’s much easier.”

The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.

The point-and-click format doesn’t include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot’s perception algorithm analyzes the object’s 3-D surface geometry to determine where the gripper should be placed. It’s similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.
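A minimal sketch of the kind of computation involved, not the Georgia Tech code: given depth-camera points around the clicked region, a local surface normal can be estimated by principal component analysis and used as the gripper's approach direction. The point cloud, function name and dimensions are all assumptions for the illustration.

```python
# Hypothetical grasp-suggestion step: estimate the surface normal of the
# clicked region by PCA and use it as the approach direction.
import numpy as np

def propose_grasp(points: np.ndarray):
    """points: (N, 3) depth-camera points around the clicked region."""
    center = points.mean(axis=0)
    # The eigenvector of the local covariance with the smallest eigenvalue
    # approximates the surface normal of the patch.
    cov = np.cov((points - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    return center, normal                    # approach point and direction

# A synthetic, nearly flat patch: the recovered normal points along z.
cloud = np.random.randn(200, 3) * np.array([0.05, 0.05, 0.002])
pos, n = propose_grasp(cloud)
print("approach point:", pos.round(3), "normal:", n.round(2))
```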

“The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can’t see, such as the back of a bottle,” said Chernova. “Our brains do this on their own — we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot’s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.”

By analyzing data and recommending where to place the gripper, the algorithm shifts the burden away from the user, which reduces mistakes. In the study, college students completed tasks about two minutes faster with the new method than with the traditional interface. The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.

SMS texting could help cut internet energy use

Researchers looking more closely than ever before into everyday mobile device habits — and in particular the impact smartphone and tablet apps have on data demand — are suggesting ways that society can cut back on its digital energy consumption.

European smartphone data growth is relentless, with data traffic predicted to rise from 1.2GB to 6.5GB a month per person. Although precise energy estimates are difficult and depend on the service and network conditions, each gigabyte of streamed video can be estimated to consume around 200 watt-hours of energy across internet infrastructure and data centres.
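A quick back-of-the-envelope check of those figures, assuming the 200 Wh/GB estimate applied to all of the projected monthly traffic (an upper bound, since not all traffic is video):

```python
# Rough per-person energy cost of projected monthly mobile data,
# assuming the article's ~200 Wh/GB estimate for streamed video.
monthly_data_gb = 6.5    # projected monthly data per person (GB)
wh_per_gb = 200          # estimated energy per streamed gigabyte (Wh)
monthly_kwh = monthly_data_gb * wh_per_gb / 1000
print(f"about {monthly_kwh:.1f} kWh per person per month")  # about 1.3 kWh
```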

Following a detailed study on Android device users, and comparing observations with a large dataset of almost 400 UK and Ireland mobile devices, computer scientists at Lancaster University and the University of Cambridge identified four categories of data-hungry services — watching video, social networking, communications and listening. These four categories equate to around half of mobile data demand.

Watching videos (21 per cent of daily aggregate mobile data demand) and listening to music (11 per cent) are identified as the two most data-intensive activities. They are popular during the peak electricity demand hours of between 4pm and 8pm (when carbon emissions from generation on the UK National Grid are highest), but watching is particularly popular in the late hours before bedtime. People make their workdays more enjoyable by streaming music, with listening demand peaking during commuting hours and also at lunchtime.

The researchers recommend designers look at creating features for devices or apps that encourage people to gather together with friends and family to enjoy streamed media — reducing the overall number of data streams and downloads.

Kelly Widdicks, PhD student at Lancaster University’s School of Computing and Communications, said: “To reduce energy consumption, app designers could look at ways to coordinate people to enjoy programmes together with friends and family, listening to locally-stored or cached music, and developing special celebratory times — weekly or monthly — to more fully appreciate streamed media, rather than binge watching.

“But at least equally important is the role of service and content providers. Our studies show significant evidence that automatic queueing of additional video, and unprompted loading of selected content leads to more streaming than might otherwise have happened.”

Social networking also causes large demands for data and the researchers suggest systems architects re-evaluate the social importance and meaning of videos streamed over these platforms, alongside the energy required.

“Media, and therefore data demand, is embedded into social networking apps. We propose that this dogma of all-you-can-eat data should be challenged, for example by reducing previews or making people click through to view content that interests them,” said Dr Oliver Bates, senior researcher at Lancaster University’s School of Computing and Communications. “This may dissuade people from simply viewing media just because it is easily accessible.

“Our participants indicated there was often little meaning or utility to the automated picture feeds, and video adverts common to many social media apps.”

Figures obtained through the study indicate that the energy consumed by instant messaging apps’ data demand is around ten times higher than that of SMS. If people were to default back to sending messages via SMS rather than instant messaging services, it would help to reduce data and energy consumption further. However, the researchers point out that to make an SMS-like service practical again, more of the features of instant messaging would need to be adopted.

“By using SMS for simple text messaging, or a low-overhead instant messaging service (such as one which sends images at lower resolutions), there is good potential to decrease energy consumption from communications,” said Miss Widdicks.

“Mobile service providers and device designers can make it more convenient for people to switch between communication methods. SMS and MMS services could be revised to better suit the phone user today, such as by sending photos at a lower cost to the subscriber, catering better for group messages and by informing users that their sent messages have been received — all reasons why people use instant messaging apps,” she added.

How to catch a phisher

Computer science professors Rakesh Verma, Arjun Mukherjee, Omprakash Gnawali and doctoral student Shahryar Baki used publicly available emails from Hillary Clinton and Sarah Palin as they looked at the characteristics of phishing emails and traits of the email users to determine what factors contribute to successful attacks. The team used natural language generation — a process used to replicate human language patterns — to create fake phishing emails from real emails. It’s a tactic used by hackers to execute “masquerade attacks,” where they pretend to be authorized users of a system by replicating the writing styles of the compromised account.

Using the Clinton and Palin emails, the research team created fake emails and planted certain signals, such as fake names, repetitive sentences and “incoherent flow.” Study participants were then given eight Clinton emails and eight Palin emails — in each set, four were real and four were fake. Volunteers were asked to identify which emails were real and explain their reasoning. The study took into account the reading levels of the Clinton and Palin emails as well as the personality traits, confidence levels and demographics of the 34 volunteers who participated.
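The team's natural language generation system is not described here, so the following is only a toy illustration of the attack class: fit a first-order Markov model to someone's real messages and sample new text in a loosely similar style. The corpus string and function names are invented for the sketch.

```python
# Toy "masquerade" generator: learn word-to-word transitions from real
# text, then sample stylistically similar (but fake) text from them.
import random
from collections import defaultdict

def build_model(corpus: str):
    words = corpus.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)          # record each observed successor
    return model

def generate(model, start: str, length: int = 12) -> str:
    word, out = start, [start]
    for _ in range(length):
        successors = model.get(word)
        if not successors:          # dead end: no observed successor
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

real_emails = ("please review the attached schedule and send your "
               "comments to the team before friday")
model = build_model(real_emails)
print(generate(model, "please"))
```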

The results of the study showed that:

  • Participants could not detect the real emails with any degree of confidence. They had a 52 percent overall accuracy rate.
  • Using more complex grammar resulted in fooling 74 percent of participants.
  • 17 percent of participants could not identify any of the signals that were inserted in the impersonated emails.
  • Younger participants did better in detecting real emails.
  • Only 50 percent of the participants mentioned the fake names.
  • Only six participants could show the full header of an email.
  • Education, experience with email usage and gender did not make a difference in the ability to detect the deceptive emails.

“Our study offers ideas on how to improve IT training,” Verma said. “You can also generate these emails and then subject the phishing detectors to those kind of emails as a way to improve the detectors’ ability to identify new attacks.”

In the case of the recent Google Docs attack, Verma says people fell for the scam because they trust Google. When users opened the given URL, they were sent to a permissions page and hackers got control of their emails, contacts and potentially their personal information. Google stopped the scam, removed the fake pages and disabled offending accounts. Verma said a real Google Docs application will generally not ask for permission to access your contacts or read your emails.

The “WannaCry” ransomware attack that has hit banks, hospitals and government agencies around the globe is also spread through email phishing, and can propagate through a Google Docs-type “worm” as well.

What all email users need to know in order to protect themselves:

  • Look closely at the sender of the email and the full header that has information about how the email was routed.
  • Look at the body of the email for any fake or broken links, which can be identified by hovering the mouse over them (see the sketch after this list).
  • Think about the context of the email and how long it has been since you have had contact with the sender.
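As a minimal illustration of the link check above (not a real mail client's logic), the sketch below flags anchors whose visible text shows one web address while the underlying href points somewhere else. The HTML snippet and class name are invented for the example.

```python
# Flag links whose visible text displays one domain but whose href
# actually points elsewhere -- a common phishing tell.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            href_host = urlparse(self._href).netloc
            # If the visible text looks like a URL, its host should
            # match the actual link target.
            text_host = urlparse(text).netloc if text.startswith("http") else ""
            if text_host and text_host != href_host:
                self.suspicious.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">http://docs.google.com</a>')
print(auditor.suspicious)  # [('http://docs.google.com', 'http://evil.example.net/login')]
```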

“There will be copycat attacks in the future and we have to watch out for that,” said Verma.

Deep-learning network delineates breast cancers on digital tissue slides

Looking closer, the network correctly determined whether cancer was present in each individual pixel of a slide 97 percent of the time, rendering near-exact delineations of the tumors.

Compared to the analyses of four pathologists, the machine was more consistent and accurate, in many cases improving on their delineations.

In a field where time and accuracy can be critical to a patient’s long-term prognosis, the study is a step toward automating part of biopsy analysis and improving the efficiency of the process, the researchers say.

Currently, cancer is present in one in 10 biopsies ordered by physicians, but every biopsy must be analyzed by pathologists to identify the extent and volume of the disease, determine whether it has spread, and establish whether the patient has an aggressive or indolent cancer and therefore needs chemotherapy or a less drastic treatment.

Last month, the U.S. Food and Drug Administration approved software that allows pathologists to review biopsy slides digitally to make a diagnosis, rather than viewing the tissue under a microscope.

“If the network can tell which patients have cancer and which do not, this technology can serve as triage for the pathologist, freeing their time to concentrate on the cancer patients,” said Anant Madabhushi, F. Alex Nason Professor II of biomedical engineering at Case Western Reserve and co-author of the study detailing the network approach, published in Scientific Reports.

The study

To train the deep-learning network, the researchers downloaded 400 biopsy images from multiple hospitals. Each slide was approximately 50,000 x 50,000 pixels. The computer had to work around, or correct for, the inconsistencies of the different scanners, staining processes and protocols used at each site in order to learn the features that distinguish cancer from the rest of the tissue.
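The paper's actual architecture is not given here, so the following is only a minimal sketch of the standard approach for images this large: tile the slide into small patches, classify each patch as cancer or not, and stitch the predictions back together to delineate the tumor. The layer sizes and patch dimensions are assumptions.

```python
# Minimal patch-based classifier sketch in PyTorch. Whole slides
# (~50,000 x 50,000 px) are far too large for a CNN, so they are tiled
# into small labeled patches; per-patch predictions are later stitched
# back into a pixel-level delineation.
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # cancer vs. benign

    def forward(self, x):              # x: (batch, 3, 64, 64) RGB patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

net = PatchNet()
patches = torch.randn(8, 3, 64, 64)    # stand-in for stained-tissue tiles
logits = net(patches)
print(logits.shape)                    # torch.Size([8, 2])
```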

The researchers then presented the network with 200 images from The Cancer Genome Atlas and University Hospitals Cleveland Medical Center. The network scored 100 percent on determining the presence or absence of cancer on whole slides and nearly as high per pixel.

“The network was really good at identifying the cancers, but it will take time for it to match the 20 years of practice and training of a pathologist in identifying complex cases and mimics, such as adenosis,” said Madabhushi, who also directs the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve.

Network training took about two weeks, and identifying the presence and exact location of cancer in the 200 slides took about 20 to 25 minutes each.

That was done two years ago. Madabhushi suspects training now — with new computer architecture — would take less than a day, and cancer identification and delineation could be done in less than a minute per slide.

“To put this in perspective,” Madabhushi said, “the machine could do the analysis during ‘off hours,’ possibly running the analysis during the night and providing the results ready for review by the pathologist when he or she comes into the office in the morning.”

Using Bitcoin to prevent identity theft

At the IEEE Symposium on Security and Privacy this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are presenting a new system that uses Bitcoin’s security machinery to defend against online identity theft.

“Our paper is about using Bitcoin to prevent online services from getting away with lying,” says Alin Tomescu, a graduate student in electrical engineering and computer science and first author on the paper. “When you build systems that are distributed and send each other digital signatures, for instance, those systems can be compromised, and they can lie. They can say one thing to one person and another thing to another person. And we want to prevent that.”

An attacker who hacked a public-key encryption system, for instance, might “certify” — or cryptographically assert the validity of — a false encryption key, to trick users into revealing secret information. But it couldn’t also decertify the true key without setting off alarms, so there would be two keys in circulation bearing certification from the same authority. The new system, which Tomescu developed together with his thesis advisor, Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science at MIT, defends against such “equivocation.”

Because Bitcoin is completely decentralized, the only thing ensuring its reliability is a massive public log — referred to as the blockchain — of every Bitcoin transaction conducted since the system was first introduced in 2009. Earlier systems have used the Bitcoin machinery to guard against equivocation, but for verification, they required the download of the entire blockchain, which is 110 gigabytes and growing hourly. Tomescu and Devadas’ system, by contrast, requires the download of only about 40 megabytes of data, so it could run on a smartphone.

Striking paydirt

Extending the blockchain is integral to the process of minting — or in Bitcoin terminology, “mining” — new bitcoins. The mining process is built around a mathematical function, called a one-way hash function, that takes three inputs: the last log entry in the blockchain; a new blockchain entry, in which the miner awards him- or herself a fixed number of new bitcoins (currently 12.5); and an integer. The output of the function is a string of 1s and 0s.

Mining consists of trying to find a value for the input integer that results in an output string with a prescribed number of leading 0s — currently about 72. There’s no way to do this except to try out lots of options, and even with a huge bank of servers churning away in the cloud the process typically takes about 10 minutes. And it’s a race: Adding a new entry — or “block” — to the blockchain invalidates the most recent work of all other miners, who now have to start over using the newly added block as an input.
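A toy version of that search (not Bitcoin's actual consensus code, which uses double SHA-256 and a far harder target of roughly 72 leading zero bits): grind through candidate integers until hashing the previous block, the new entry and the integer yields a digest with the prescribed number of leading zero bits.

```python
# Toy proof-of-work in the spirit described above. A 16-bit target makes
# the search finish in a fraction of a second; Bitcoin's is ~72 bits.
import hashlib

def mine(prev_block: bytes, new_entry: bytes, zero_bits: int = 16) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(
            prev_block + new_entry + nonce.to_bytes(8, "big")
        ).digest()
        # Leading zero bits mean the digest, read as a big integer,
        # falls below the difficulty target.
        if int.from_bytes(digest, "big") >> (256 - zero_bits) == 0:
            return nonce
        nonce += 1

print(mine(b"previous-block", b"reward: 12.5 BTC"))
```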

In addition to assigning the winning miner the latest quota of bitcoins, a new block in the blockchain also records recent transactions by Bitcoin users. Roughly 100,000 commercial vendors in the real world now accept payment in bitcoins. To verify a payment, the payer and vendor simply broadcast a record of their transaction to the Bitcoin network. Miners add the transaction to the blocks they’re working on, and when the transaction shows up in the blockchain, it’s a matter of public record.

The transaction record also has room for an 80-character text annotation. Eighty characters isn’t enough to record, say, all the public keys certified by a public-key cryptography system. But it is enough to record a cryptographic signature verifying that a certification elsewhere on the Internet is legitimate.

Previous schemes for preventing equivocation simply stored such signatures in the annotations of transaction records. Bitcoin’s existing security structure prevents tampering with the signatures.

But verifying that a Web service using those schemes wasn’t equivocating required examining every transaction in every block of the blockchain — or at least, every block added since the service first used the scheme to certify a public assertion. It’s that verification process that Tomescu and Devadas have refined.

Efficient audits

“Our idea is so simple — it’s embarrassingly simple,” Tomescu says. The central requirement of Bitcoin is that no one can spend the same bitcoin in more than one place, and the system has cryptographic protocols in place to prevent that from happening.

So Tomescu and Devadas’s system — called Catena — simply adds the requirement that every Bitcoin transaction that logs a public assertion must involve an actual bitcoin transfer. Users may simply transfer the bitcoin to themselves, but that precludes the possibility of transferring the bitcoin to anyone else in the same block of the blockchain. Consequently, it also precludes equivocation within the block.

To prevent equivocation between blocks, it’s still necessary to confirm that the bitcoin that the Catena user spends in one block is the same one that it spent the last time it made a public assertion. But again, because the ability to verify a bitcoin’s chain of custody is so central to the success of the whole Bitcoin system, this is relatively easy to do. People who want to use Catena to audit all the public assertions of a given Web service still need to download information from every block of the blockchain. But they need to download only a small cryptographic proof — about 600 bytes — for each block, rather than the block’s full megabyte of data.
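A bare-bones sketch of that audit idea; the data layout and function names below are invented for illustration and are not Catena's API. Each public statement spends the output of the previous statement's transaction, so two competing "successor" statements would require a double-spend, which Bitcoin itself rejects:

```python
# Audit a Catena-style statement log: each statement transaction must
# spend exactly the output of the previous statement transaction.
from dataclasses import dataclass

@dataclass
class StatementTx:
    spends: str      # id of the previous statement's output
    txid: str        # this transaction's id
    statement: str   # the short (<=80-char) annotation

def audit(chain: list) -> bool:
    """True iff the chain of custody is unbroken (no possible fork)."""
    for prev, cur in zip(chain, chain[1:]):
        if cur.spends != prev.txid:
            return False   # broken chain: possible equivocation
    return True

log = [
    StatementTx(spends="genesis", txid="tx1", statement="key A certified"),
    StatementTx(spends="tx1", txid="tx2", statement="key A revoked"),
]
print(audit(log))  # True
```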

Research on quantum technologies

A central concept in quantum mechanics is that of energy level. When a quantum mechanical system, such as an atom, absorbs a quantum of energy from light, it becomes excited from a lower to a higher energy level. Changing the separation between the energy levels is called frequency modulation. In quantum devices, frequency modulation is utilized in controlling interactions, inducing transitions among quantum states and engineering artificial energy structures.
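To make the idea concrete, here is a small numerical sketch with purely illustrative values, not tied to any system in the article: modulating a level splitting as omega(t) = omega0 + A*cos(nu*t) produces spectral sidebands at omega0 ± k*nu with Bessel-function weights J_k(A/nu), and those sidebands are the handle used to induce transitions and engineer artificial energy structures.

```python
# Sideband generation under frequency modulation of a level splitting.
# The phase of the level is the time integral of omega(t); its spectrum
# shows lines at omega0 and omega0 +/- nu. Values are illustrative.
import numpy as np

omega0, A, nu = 5.0, 0.5, 0.3       # splitting, modulation depth, modulation freq
t = np.linspace(0, 400, 40001)
phase = omega0 * t + (A / nu) * np.sin(nu * t)
signal = np.exp(1j * phase)

spectrum = np.abs(np.fft.rfft(signal))
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
for w in (omega0 - nu, omega0, omega0 + nu):
    k = np.argmin(np.abs(freqs - w))
    print(f"spectral weight near {w:.1f} rad/s: {spectrum[k] / spectrum.max():.2f}")
```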

“The basics of quantum mechanical frequency modulation have been known since the 1930s. However, the breakthrough of various quantum technologies in the 2000s has created a need for better understanding of, and better theoretical tools for, quantum systems under frequency modulation,” says Matti Silveri, currently a postdoctoral researcher at the University of Oulu.

Understanding and exploiting frequency modulation is important for developing more accurate quantum devices and faster quantum gates for near-future small-scale quantum computers. The field of quantum devices and computing is growing rapidly and has recently attracted investment from major technology companies such as Google, Intel, IBM and Microsoft.

“We wanted to review the recent experimental and theoretical progress across many different kinds of quantum systems under frequency modulation. We hope to accelerate the research in this field,” adds docent Sorin Paraoanu from Aalto University.

The article discusses the physics of frequency modulation in superconducting quantum circuits, ultracold atoms, nitrogen-vacancy centers in diamond and nanoelectromechanical resonators. With these platforms, energy levels can be accurately modulated with voltage, microwaves or lasers in various experimental settings. The theoretical results of the article are general and can be applied to various quantum systems.

Comparing student performance

Elementary school students scored marginally higher on the computer-based exam that allowed them to go back to previous answers than on the paper-based exam, while there was no significant difference for middle school students on those two types of tests.

In contrast, high school students showed no difference in their performance on the three types of tests. Likewise, previous research has found that the option to skip, review, and change previous responses also had no effect on the test results of college students.

For the study, tests were given to students in grades 4-12 that assessed their understanding of energy through three testing systems. Instructors elected to administer either the paper-and-pencil test (PPT) or one of two computer-based tests (CBT) based on the availability of computers in their classrooms.

One CBT (using TAO, an open source online testing system) allowed students to skip items and freely move through the test, while the other CBT (using the AAAS assessment website) did not allow students to return to previous test items. In addition, on the TAO test, answers were selected by directly clicking on the text corresponding to an answer. On the AAAS exam, answers were chosen more indirectly, by clicking on a letter (A, B, C, or D) at the bottom of the screen corresponding with an answer.

Gender was found to have little influence on a student’s performance on PPT or CBT; however, students whose primary language was not English had lower performances on both CBTs compared to the PPT. The cause for the difference depending on primary language was unclear, but could have been linguistic challenges that the online environment presented or limits on opportunities to use computers in non-English-speaking environments.

Overall, the study results, along with previous research, indicate that being able to skip, review, and change previous responses could be beneficial for younger children in elementary and middle school but have no influence on older students in high school and college.

Furthermore, results indicated that marking an answer in a different location on a multiple-choice test could be challenging for younger students, students with poor organizational skills, students who have difficulties with concentration, or students who are physically impaired. In addition, having to match an answer to a corresponding letter at the bottom of the screen likely adds an additional level of complexity and cognitive processing.

The researchers note that additional study of CBT answer-choice selection and test navigation features and how they influence elementary and middle school students’ test performance is warranted.

The study was supported by a grant from the Institute of Education Sciences.

Electronic healthcare systems

Information security and protection of privacy are some of the most important factors in the development of high-quality tools in the healthcare sector. If no attention is paid to these aspects, there is substantial risk that individuals may come to harm in healthcare situations. Leonardo Iwaya, PhD student in computer science at Karlstad University, explores ways of securing information and protecting privacy when using mobile applications in healthcare (mHealth).

“Mobile apps are for example used in developing countries to increase the coverage and the access to public healthcare,” says Leonardo Iwaya. “But many projects fail because issues related to data security and privacy cannot be successfully integrated in the systems.”

For instance, in Brazil, mHealth tools have been used by community health workers to improve patients’ treatment in poor and rural areas, strengthening the link between society and the public health system. These patients often have limited possibilities to visit healthcare clinics, so the project instead involved healthcare workers visiting patients at home. Smartphones are, for example, used to streamline the handling of patient records. Information gathered during a visit is also used to analyse how local conditions affect people’s health, so that more preventive work can be done.

“My part in the project has been to look at how systems are designed and data is processed with respect to data protection and privacy,” says Leonardo Iwaya. “These issues have to be considered from the start if you want to develop digital healthcare systems in which information is properly secured and privacy is protected.”

Attention spans drop when online ads pop up

Rejer and Jankowski’s direct, objective and real-time approach extends current research about the effect of intrusive marketing on internet users. So far, most studies on this topic have been subjective in nature, and have typically analysed only the impact of online advertisements on brand awareness and memory. Other researchers have investigated web users’ visual attention, recorded their behaviour, or relied heavily on subjective information provided in questionnaires.

In Rejer and Jankowski’s experiment, five Polish men and one woman, between 20 and 25 years of age, were instructed to read ten short pages of text on a computer screen, after which they had to answer questions about the content. During the reading process, their attention was distracted by online advertisements that randomly appeared on screen. The brain activity of each participant was measured using an electroencephalogram (EEG). The researchers not only took note of each participant’s brain signal patterns, but also analysed how consistent these were across the different trials, and how they correlated with those of other participants.

Two main effects were observed for most subjects. First, the presence of online advertisements influenced participants’ concentration. This was deduced from the significant drop in beta activity that was observed in the frontal/prefrontal cortical areas. According to the researchers, this could indicate that the presentation of the advertisement induced a drop in concentration levels.

Secondly, the appearance of the advertisement induced changes in the frontal/prefrontal asymmetry index. However, the direction of this change differed among subjects, in that for some it dipped, and for others it increased.
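For readers unfamiliar with these measures, here is a hedged sketch of how band power and a frontal asymmetry index are typically computed from EEG; it is not the authors' analysis pipeline. The data are synthetic, the channel names (F3/F4) are conventional frontal electrode labels, and since the article does not state which band enters the asymmetry index, the sketch reuses the beta band.

```python
# Beta-band (13-30 Hz) power and a frontal asymmetry index from EEG.
# Synthetic signals stand in for left (F3) and right (F4) frontal channels.
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
f3 = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 20 * t)  # left
f4 = rng.standard_normal(t.size) + 0.8 * np.sin(2 * np.pi * 20 * t)  # right

def beta_power(x):
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    band = (freqs >= 13) & (freqs <= 30)
    return psd[band].sum() * (freqs[1] - freqs[0])   # integrate PSD over band

# Asymmetry index: log(right) - log(left); its sign tracks which
# hemisphere is more active, as described above.
asymmetry = np.log(beta_power(f4)) - np.log(beta_power(f3))
print(f"beta power L={beta_power(f3):.3f}, R={beta_power(f4):.3f}, "
      f"asymmetry={asymmetry:.3f}")
```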

The researchers believe that the participants’ response to the advertisement might be influenced by their so-called motivation predisposition. “If the subject is more ‘approach’ oriented, the changes in the asymmetry index might reflect growing activity in the left brain hemisphere. If, on the other hand, the subject is more ‘withdraw’ oriented, these changes might reflect the growing activity in the right hemisphere,” explains Rejer, who also notes that this is only a hypothesis that should be tested in future work on the intrusive nature of different forms of online advertisements.

A new route to molecular wires

As conventional silicon integrated circuits approach their lower size limit, new concepts are required, such as molecular electronics — the use of electronic components composed of molecular building blocks. Shuo-Wang Yang at the A*STAR Institute of High Performance Computing, together with his colleagues and collaborators, is using computer modeling to design electric wires made of polymer chains.

“It has been a long-standing goal to make conductive molecular wires on traditional semiconductor or insulator substrates to satisfy the ongoing demand for miniaturization in electronic devices,” explains Yang.

Progress has been delayed in identifying molecules that both conduct electricity and bind to substrates. “Structures with functional groups that facilitate strong surface adsorption typically exhibit poor electrical conductivity, because charge carriers tend to localize at these groups,” he adds.

Yang’s team applied density functional theory to a two-step approach for synthesizing linear polymer chains on a silicon surface. “This theory is the best simulation method for uncovering the mechanism behind chemical reactions at atomic and electronic levels. It can be used to predict the reaction pathways to guide researchers,” says Yang.

The first step is the self-assembled growth of single monomers on to the silicon surface. Yang’s team studied several potential monomers including, most recently, a thiophene-substituted alkene and a symmetrical benzene ring with three alkynes attached. The second step is the polymerization of the tethered monomers, triggered by adding a radical to the system.

According to the calculations, these tethered polymers are semiconductors in their natural state. “We introduced some holes, such as atomic defects, to the wires to shift the Fermi levels and make them conductive,” Yang explains.

The team then studied the electron band structures of each component before and after tethering and polymerization, finding little charge transfer between the molecular wires and the silicon surfaces. “The surface-grafted polymers and underlying substrates seem independent of each other, which is an ideal model of a conductive molecular wire on a traditional semiconductor substrate,” says Yang.

“Our finding provides a theoretical guide to fabricating ideal molecular wires on traditional semiconducting surfaces,” he adds. The team plans to extend this work to study 2D analogs of these 1D polymer chains that could serve as a metallic layer in molecular electronic devices.

The A*STAR-affiliated researchers contributing to this research are from the Institute of High Performance Computing and Institute of Materials Research and Engineering.