ISRO Latest news: India has been quietly working on key technology to enable space station | India News – Times of India

News that India is planning its own space station may appear sudden, but work on a key technology – space docking – without which such a station cannot be made functional has been going on for at least three years. The project has already received clearance, and the Department of Space has earmarked Rs 10 crore.



Cineworld’s ‘revolutionary’ ScreenX technology is coming to Edinburgh – but what does it actually mean? – Edinburgh Evening News

A ‘revolutionary’ new cinema experience is heading for Edinburgh next month – and this is everything you need to know.

When will ScreenX come to Edinburgh?


Cineworld has announced that its ‘revolutionary’ new cinema technology, ScreenX, will arrive at Cineworld Edinburgh, in Fountain Park, on Tuesday, July 2.

What is ScreenX?

ScreenX is the world’s first multi-projection cinema technology, expanding the traditional cinema screen to the side auditorium walls, creating a 270-degree viewing experience for the audience.

Cineworld Edinburgh will be the latest cinema in the UK to install the new technology, bringing the total number of ScreenX auditoriums to 13. Cineworld Edinburgh’s ScreenX will be the first of its kind in Scotland.


What difference does it make to the experience?

By utilising the side walls of the auditorium and transforming them into an extension of the main screen, the story is conveyed more convincingly, resulting in further audience investment.

ScreenX is said to evolve the moviegoing experience by providing a brand-new canvas with immersive visuals and creative storytelling. Moviegoers can go “beyond the frame” of the movie screen by being at the centre of an expanding image for an expansive viewing experience.

The ScreenX offering came to the UK through a partnership between Cineworld and CJ 4DPLEX, who previously worked together to introduce 4DX to the UK in 2015. The first ScreenX auditorium in the UK opened in 2018 and has captured the attention of moviegoers since.

What has been said?

“We are very happy to be able to introduce ScreenX technology to the people of Scotland through Cineworld,” said JongRyul Kim, CEO of CJ 4DPLEX. “ScreenX offers a brand-new type of movie-going experience, where the audience doesn’t just sit back and watch a movie, but really experiences it. We are incredibly fortunate to be working with a great partner like Cineworld, as they are true pioneers who believe in advancing the cinematic landscape.”

Lindsay Cook, General Manager of Cineworld Edinburgh, said: “We are excited to be the first cinema in Scotland to launch ScreenX. We are driven by innovation, and following the success of 4DX, ScreenX will offer more ways for our customers to experience film.”

What are the first films available in ScreenX?

Spider-Man: Far From Home, followed by Annabelle Comes Home.

How much will it cost?

Customers at Cineworld Edinburgh will pay £14.00 (adult ticket price) for ScreenX. Cineworld Unlimited Card customers will be able to enjoy ScreenX at Cineworld Edinburgh for an additional £3.

Tickets for ScreenX will become available at www.cineworld.com/screenx.


Huawei looks to Russian technology to replace Google’s Android on its smartphones — RT Business News

Last month, Google and a number of US tech companies were prohibited from dealing with China’s telecommunications major Huawei and other Chinese corporations. The direct order by US President Donald Trump bans American firms from supplying Huawei with spare parts or technology solutions. The step was reportedly taken amid heightened security concerns after Washington accused Chinese tech companies of spying on behalf of Beijing.

The Chinese corporation is negotiating a replacement for Android with the Aurora operating system, currently being developed by the Moscow-based firm Russian Mobile Platform, Russian news outlet The Bell reports, citing an official familiar with the issue.

Huawei Chairman Guo Ping reportedly discussed the possible deal with the Russian minister of digital development and communications, Konstantin Noskov, ahead of the St. Petersburg International Economic Forum.

“China is already testing devices with the Aurora pre-installed,” the official said.

Moreover, the subject was addressed during an official meeting between Russian President Vladimir Putin and Chinese leader Xi Jinping the day before the business event. The two presidents reportedly discussed both the possibility of installing the Aurora operating system on Huawei smartphones and the localization of some of Huawei’s production facilities in Russia.

Aurora is a mobile operating system that is being developed on the basis of Sailfish OS, designed by Finnish technology company Jolla. In 2014, Russian entrepreneur Grigory Berezkin became a co-owner of Jolla. Since 2016, the Open Mobile Platform company, associated with the businessman, has been developing a Russian version of the system. Last year, a 75-percent share in the Open Mobile Platform was purchased by Russia’s state telecommunications company, Rostelecom.



Facial recognition tech is arsenic in the water of democracy, says Liberty | Technology | The Guardian

Automated facial recognition poses one of the greatest threats to individual freedom and should be banned from use in public spaces, according to the director of the campaign group Liberty.

Martha Spurrier, a human rights lawyer, said the technology had such fundamental problems that, despite police enthusiasm for the equipment, its use on the streets should not be permitted.

She said: “I don’t think it should ever be used. It is one of, if not the, greatest threats to individual freedom, partly because of the intimacy of the information it takes and hands to the state without your consent, and without even your knowledge, and partly because you don’t know what is done with that information.”

Police in England and Wales have used automated facial recognition (AFR) to scan crowds for suspected criminals in trials in city centres, at music festivals, sports events and elsewhere. The events, from a Remembrance Sunday commemoration at the Cenotaph to the Notting Hill carnival and the Six Nations rugby, drew combined crowds in the millions.

San Francisco recently became the first US city to ban police and other agencies from using automated facial recognition, following widespread condemnation of China’s use of the technology to impose control over millions of Uighur Muslims in the western region of Xinjiang.

When deployed in public spaces, automated facial recognition units use a camera to record faces in a crowd. The images are then processed to create a biometric map of each person’s face, based on measurements of the distance between their eyes, nose, mouth and jaw. Each map is then checked against a “watchlist” containing the facial maps of suspected criminals.
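The matching step of that pipeline is simple enough to sketch in a few lines of Python. The sketch below is purely illustrative: the 128-dimensional "biometric map" vectors, the Euclidean distance measure and the 0.6 threshold are assumptions introduced for the example, not any force's actual system.

```python
import numpy as np

# Illustrative sketch of the watchlist check described above. A real AFR
# system derives each "biometric map" from facial measurements; here the
# maps are just toy vectors, and the distance metric and threshold are
# assumed values chosen for the example.

def match_against_watchlist(face_map, watchlist, threshold=0.6):
    """Return watchlist IDs whose stored map lies close to face_map.

    Smaller Euclidean distance means more similar; the threshold trades
    false positives against missed matches.
    """
    face_map = np.asarray(face_map)
    return [person_id for person_id, stored_map in watchlist.items()
            if np.linalg.norm(face_map - np.asarray(stored_map)) < threshold]

# Toy usage: five stored maps, one probe that nearly matches suspect_2.
rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(5)}
probe = watchlist["suspect_2"] + rng.normal(scale=0.01, size=128)
print(match_against_watchlist(probe, watchlist))  # ['suspect_2']
```

Where the threshold is set matters enormously in practice: loosen it and innocent passers-by are flagged, tighten it and genuine suspects walk past, which is the trade-off behind the false-positive figures reported later in this piece.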

Spurrier said: “I think it’s pretty salutary that the world capital of technology has just banned this technology. We should sit up and listen when San Francisco decides that they don’t want this on their streets.

“It goes far above and beyond what we already have, such as CCTV and stop-and-search. It takes us into uncharted invasive state surveillance territory where everyone is under surveillance. By its nature it is a mass surveillance tool.”

How is facial recognition used around the world?

China has embraced automated facial recognition on a dramatic scale. It is the first government known to use the technology for racial profiling, with cameras screening hundreds of thousands of residents to identify and control Uighurs, a largely Muslim minority. In cities across the vast country, the technology checks the identities of students at school gates and universities, keeps citizens in line by naming and shaming jaywalkers, and records the faces of people who use too much loo roll in public toilets.

In Russia, 5,000 cameras in Moscow are fitted with facial recognition software that takes live footage and checks it against police and passport databases for wanted people. Moscow-based NTechLab’s face recognition software is rated as one of the best in the world.

Despite San Francisco’s decision to ban facial recognition technology, many US police forces are turning to it to help solve crimes. Rather than the mass surveillance that comes with live AFR, the technology takes an image of a suspect and checks it against those on a police database. Outside law enforcement, Atlanta airport has installed the technology to allow passengers to pass through check-in and security and on to their plane without having to fish out their passport.

In December, it emerged that Taylor Swift used facial recognition to screen fans for stalkers before a gig at the Rose Bowl. Fans were lured into a kiosk to watch rehearsal clips where a facial recognition camera checked their face against a database of hundreds of known Swift stalkers.


Spurrier said a lack of strong governance and oversight could allow the police to roll out live facial recognition by stealth, without a meaningful debate on whether the public wanted it or not. The technology was developing so fast, she said, that government was failing to keep up.

“There is a real sense of technological determinism that is often pushed by the big corporations, but also by law enforcement and by government, that it’s inevitable we’ll have this, so we should stop talking about why we shouldn’t have it,” she said.

“What San Francisco shows us is that we can have the moral imagination to say, sure, we can do that, but we don’t want it. It’s so important not to assume that security outweighs liberty at every turn.”

Liberty brought a landmark legal case against South Wales police last month challenging its use of the technology. It is supporting the Cardiff resident Ed Bridges, who claimed an invasion of privacy after an AFR unit captured and processed his facial features as he popped out for a sandwich in December 2017, and again at a peaceful protest against the arms trade. A verdict is expected in the coming weeks.

Three UK forces have used AFR in public spaces since 2014: the Metropolitan police, South Wales police and Leicestershire police. A Cardiff University review of the South Wales police trials, which were backed by £2m from the Home Office, found the system froze and crashed when faced with large crowds, and struggled with bad light and poor-quality images. The force’s AFR units flagged up 2,900 possible suspects, but 2,755 were false positives. An upgrade of the software, provided by the Japanese company NEC, led to confirmed matches increasing from 3% to 26%.
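Those raw counts translate into a strikingly low precision, which a short calculation makes concrete. The sketch below uses only the 2,900 and 2,755 figures quoted above, aggregated over the whole trial, so it will not match the per-deployment 3% and 26% rates exactly.

```python
# Precision implied by the South Wales trial figures quoted above.
flagged = 2_900          # possible suspects flagged by the AFR units
false_positives = 2_755  # flags later confirmed to be wrong

true_positives = flagged - false_positives  # 145
precision = true_positives / flagged        # ~0.05
print(f"{true_positives}/{flagged} flags were correct "
      f"({precision:.0%} precision, {1 - precision:.0%} false positives)")
# -> 145/2900 flags were correct (5% precision, 95% false positives)
```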

The report described other technical problems with the system. Officers identified what they call “lambs”, people on the watchlist who were repeatedly matched to innocent members of the public. At Welsh rugby matches, for example, AFR flagged up one female suspect 10 times. It was wrong on every occasion.

There are also more insidious issues. The technology works better for white men than any other group, meaning women and black and minority ethnic people are more likely to be flagged up in error, and so stopped and asked to identify themselves. Watchlists are suspect too, and reflect the kinds of biases that lead to areas with large black populations being over-policed.

Spurrier said: “You can see a train of injustice where the existing, entrenched prejudice in our society is codified in technology and then played out in the real world in a way that further breaks down trust between the police and communities, and further alienates and isolates those communities.”

While technical flaws can potentially be fixed, Spurrier opposes live facial recognition on a fundamental level. Mass surveillance has a chilling effect that distorts public behaviour, she said, a concern also raised in a May report by the London policing ethics panel. It found that 38% of 16 to 24-year-olds would stay away from events using live facial recognition, with black and Asian people roughly twice as likely to do so as white people.

Spurrier said: “It doesn’t take a great deal of imagination to see how something like facial recognition eats into the fabric of society and distorts relationships that are really human and really essential to a thriving democracy.

“Little by little, across the country, in tiny but very significant ways, people will stop doing things. From a person saying I’m not going to go to that protest, I’m not going to pray at that mosque, or hang out with that person, or walk down that street.

“Once that is happening at scale, what you have is a mechanism of social control. When people lose faith that they can be in public space in that free way, you have put arsenic in the water of democracy and that’s not easy to come back from.”

The Met police said that, following the London policing ethics panel report, the force was awaiting a second independent evaluation of its trials. “Our trial has come to an end so there are no plans to carry out any further deployments at this stage,” a spokesperson said, adding that the Met would then consider if and how to use the technology in the future.

Deputy chief constable Richard Lewis of South Wales police said the force had been cognisant of privacy concerns throughout its trials and understood it must be accountable and subject to “the highest levels of scrutiny”.

“We have sought to be proportionate, transparent and lawful in our use of AFR during the trial period and have worked with many stakeholders to develop our approach and deployments,” he said. “During this period we have made a significant number of arrests and brought numerous criminals to justice.”


‘I’ve paid a huge personal cost:’ Google walkout organizer resigns over alleged retaliation | Technology | The Guardian

A prominent internal organizer against Google’s handling of sexual harassment cases has resigned from the company, alleging she was the target of a campaign of retaliation designed to intimidate and dissuade other employees from speaking out about workplace issues.

Claire Stapleton, a longtime marketing manager at Google and its subsidiary YouTube, said she decided to leave the company after 12 years when it became clear that her trajectory at the company was “effectively over”.

“I made the choice after the heads of my department branded me with a kind of scarlet letter that makes it difficult to do my job or find another one,” she wrote in an email to co-workers announcing her departure on 31 May. “If I stayed, I didn’t just worry that there’d be more public flogging, shunning, and stress, I expected it.”

“The message that was sent [to others] was: ‘You’re going to compromise your career if you make the same choices that Claire made,’” she told the Guardian by phone. “It was designed to have a chilling effect on employees who raise issues or speak out.”

Stapleton was one of the core group of Google employees who sprang into action in October 2018 following a report in the New York Times that Google had paid a $90m severance package to the former executive Andy Rubin despite finding credible an allegation that he had forced a female employee to perform oral sex.

The employees organized a “Google Walkout for Change” on 1 November 2018 that drew tens of thousands of participants at Google offices around the globe. Among the group’s demands were an end to forced arbitration in cases of harassment and discrimination, as well as the appointment of an employee representative to the company’s board of directors.

Google executives publicly supported the employee activism, and acceded to one of the demands on arbitration.

But in April, Stapleton and her fellow organizer Meredith Whittaker spoke out in internal letters about what they said was a “culture of retaliation”.

In the letters, Stapleton said that two months after the walkout, she was demoted and “told to go on medical leave” despite not being sick. The demotion was reversed after she hired a lawyer, she said.

In a statement, Google said: “We thank Claire for her work at Google and wish her all the best. To reiterate, we don’t tolerate retaliation.” It added: “Our employee relations team did a thorough investigation of her claims and found no evidence of retaliation. They found that Claire’s management team supported her contributions to our workplace, including awarding her their team Culture Award for her role in the Walkout.”

Stapleton says the backlash intensified after her allegations spread both internally and in the press. Two managers emailed her entire department to rebut the allegation, she said, a move she claims was “unbelievable” and “truly unprecedented” given company norms around speaking about individual personnel issues. “In a way it showed me how powerful the organizing has been because it was truly extreme,” she added.

Stapleton’s departure comes amid considerable turmoil for Google and YouTube, which are facing increased antitrust scrutiny from the US government, criticism over inconsistent and controversial decisions related to content moderation, and growing activism from employees over issues including the company’s treatment of temps, vendors and contractors (TVCs).

Stapleton said that in her view, all these problems were related: “The one very simple thing that connects all these issues is that it requires leadership and real accountability, and that’s not something that we’ve seen in these very challenging, high-stakes times.

“You could connect the way that TVCs are treated all the way up to how an Andy Rubin payment happens,” she said. “These are systemic imbalances.”

Stapleton said that despite her decision to leave the company, she was optimistic about the future of worker organizing at Google.

“I’ve paid a huge personal cost in a way that is not easy to ask anyone else to do,” she said. “There’s a lot of exhaustion and there’s a lot of fear, but I think that speaking up in whatever way people are comfortable with is having an absolutely tremendous impact.”

“It’s not going away,” she added.

Do you work for Google? Do you have concerns about the workplace? Contact the author: julia.wong@theguardian.com or julia.carrie.wong@protonmail.com.


YouTube blocks history teachers uploading archive videos of Hitler | Technology | The Guardian

YouTube has blocked some British history teachers from its service for uploading archive material related to Adolf Hitler, saying they are breaching new guidelines banning the promotion of hate speech.

The video-sharing website announced on Wednesday that it would remove material glorifying the Nazis from its platform in an attempt to stop people being radicalised. In the process, however, it also deleted videos uploaded to help educate future generations about the risks of fascism.

Scott Allsop, who owns the long-running MrAllsopHistory revision website and teaches at an international school in Romania, had his channel, featuring hundreds of historical clips on topics ranging from the Norman conquest to the cold war, deleted for breaching the rules that ban hate speech.

“It’s absolutely vital that YouTube work to undo the damage caused by their indiscriminate implementation as soon as possible,” said Allsop. “Access to important material is being denied wholesale as many other channels are left branded as promoting hate when they do nothing of the sort.”

While previous generations of history students relied on teachers playing old documentaries recorded on VHS tapes on a classroom television, they now use YouTube to show raw footage of the Nazis and famous speeches by Hitler.

The Google-owned service sent an automated email to Allsop saying his clips channel had been removed for uploading “content that promotes hatred or violence against members of a protected group”. Much of it consisted of clips from old BBC documentaries, which are no longer easily available, in addition to cine film of Hitler’s speech the night he was appointed chancellor and a short compilation of Joseph Goebbels talking about propaganda.

Richard Jones-Nerzic, another British teacher affected by the crackdown, suggested YouTube’s policy did not take into account the extent to which the history syllabus focused on the second world war.

“Modern world study and Hitler in particular have dominated the history curriculum in the UK over the last 25 years,” he said, explaining that he had been censured for uploading clips to his channel from old documentaries about the rise of Nazism.

Some of his clips now carry warnings that users may find the material offensive, while others have been removed completely. He said he was appealing against YouTube’s deletion of archive Nazi footage taken from mainstream media outlets, arguing that this was in itself a “form of negationism or even Holocaust denial”.

Allsop had his account reinstated on Thursday after an appeal but said he had been contacted by many other history teachers whose accounts have also been affected by the ban on hate speech. Users who do not swiftly appeal against YouTube’s decisions could find their material removed for good.

Both men said they had sympathy with what the site was trying to achieve and acknowledged that sometimes the archive fascist material they uploaded to YouTube was viewed by the modern-day far right.

“I have for a long time been unhappy with how my films have often been hijacked by neo-fascists through the comments section, but YouTube’s actions are far too indiscriminate,” said Jones-Nerzic.

Allsop suggested the site needed to take educational context into account rather than rely on automated processes: “I fully support YouTube’s increased efforts to curb hate speech, but also feel that silencing the very people who seek to teach about its dangers could be counter-productive to YouTube’s intended goal.”

A YouTube spokesperson said the company used a combination of technology and people to enforce the guidelines, and encouraged individuals to provide context to clips uploaded for educational purposes rather than simply uploading raw material. They said Allsop and Jones-Nerzic’s material had been reinstated after an appeal.


Trump Committing Nuclear ‘Malpractice’ With ‘Absolutely Disastrous’ Technology Transfer to Saudi Arabia, Senator Warns

Senator Chris Murphy argued that President Donald Trump is committing “nuclear nonproliferation malpractice” by transferring nuclear technology to Saudi Arabia against the wishes of Congress, as a bipartisan group of lawmakers aims to block a multibillion-dollar arms sale to the kingdom.

“The president’s committing nuclear nonproliferation malpractice,” Murphy, a Democrat from Connecticut, said during a Wednesday morning interview with MSNBC’s Morning Joe. “Because he’s pulled out of the Iran agreement, and by selling to the Saudis nuclear technology, it makes it much more likely that the Iranians are going to restart their nuclear program, because they see that the Saudis have a head start,” the senator warned.

“So, this is absolutely disastrous,” he asserted.

President Donald Trump meets with Saudi Arabia’s Crown Prince Mohammed bin Salman at the White House on March 20, 2018 in Washington, D.C. (AFP/Mandel Ngan)

Senate Democrats revealed on Tuesday that the Trump administration had approved the transfer of nuclear technology to Saudi Arabia on at least two occasions after the brutal murder of U.S. resident and Washington Post journalist Jamal Khashoggi at the hands of a Saudi kill squad in Turkey last October. Trump’s cabinet officials took months to inform Congress of the transfers, despite Republican and Democratic lawmakers voicing serious concerns and opposition to such a move.

Although Saudi Arabia says it wants the nuclear technology to build energy reactors to provide power for the kingdom, Crown Prince Mohammed bin Salman, who is widely seen as the kingdom’s de facto ruler, said in March 2018 that it would pursue a nuclear weapon of its own if Iran developed one.

“Saudi Arabia does not want to acquire any nuclear bomb, but without a doubt, if Iran developed a nuclear bomb, we will follow suit as soon as possible,” the crown prince told CBS News in an interview.

Senator Tim Kaine, a Democrat from Virginia, slammed the Trump administration’s decision to transfer the nuclear information to the Saudi regime in a Tuesday press release.

“The alarming realization that the Trump Administration signed off on sharing our nuclear know-how with the Saudi regime after it brutally murdered an American resident adds to a disturbing pattern of behavior,” he said.

Khashoggi was tortured, killed and cut into pieces with a bonesaw after he entered the Saudi consulate in Istanbul at the beginning of last October. Although the kingdom initially denied the killing of the U.S. resident, who was a prominent Saudi dissident who had fled the kingdom, it later admitted that the operation had been carried out after intense international backlash. Intelligence reports strongly linked the crown prince to signing off on the journalist’s murder, but Trump has consistently defended the kingdom. Calling Saudi Arabia a “great ally,” the president has argued that the U.S. needs the kingdom to continue buying weapons and to ensure global oil prices remain low.

Demonstrators dressed as Saudi Arabia’s Crown Prince Mohammed bin Salman and President Donald Trump pretend to kiss outside the White House in Washington, D.C. on October 19, 2018, demanding justice for journalist Jamal Khashoggi. (AFP/Jim Watson)

Leading Republicans and Democrats have strongly disagreed with the president, attempting to block the sale of weapons to the kingdom. Lawmakers have also raised objections to Washington’s continued support for the Saudi-led war in Yemen, which has led to a massive famine, a cholera outbreak and the death of thousands of civilians.

In April, Congress passed a resolution to withdraw U.S. support for the Saudi-led coalition involved in the conflict, but Trump used his second presidential veto to block the move. Now Republicans and Democrats aim to force 22 resolutions rebuking the Trump administration for declaring a national emergency to circumvent Congress and sell the kingdom weapons.

“We will not stand idly by and allow the president or the secretary of state to further erode congressional review and oversight of arms sales,” New Jersey’s Senator Robert Menendez, the top Democrat on the Senate Foreign Relations Committee, said in a statement this week.

In his Wednesday interview with Morning Joe, Murphy insinuated, as many analysts and lawmakers have in the past, that Trump’s unwavering support for Saudi Arabia is somehow related to his and his family’s personal financial interests.

“It’s an inexplicable move by the administration,” the senator said. “And for many of us, I think we can’t understand it outside the framework of the Trump family’s personal relationship with the Saudis,” he explained.

“There’s a lot of money that historically has flowed from Gulf state individuals to the Trump family fortune, to their real estate empire,” Murphy pointed out, “and that has to be part of the explanation as to why they continue to bend over backwards to try to make the Saudis happy.”


Is technology key to improving global health and education, or just an expensive distraction? | World Economic Forum

After decades of attempting to improve failing health and education systems in developing countries, the situation in many areas is still dire. In some sub-Saharan African countries, children achieve as little as 2.3 to 5 years of learning, despite typically spending 8 years in school. More than five million children still die before their fifth birthday. The old approach isn’t working, which is why it’s tempting to think that technology is the quick fix.

Artificial intelligence for medicine and educational technology (ed-tech) for learning are gaining popularity with both the public and investors. People are envisioning a future where children across the world can be taught through virtual reality and patients in remote areas will be treated by robots. Small-scale examples of success are being seized upon as justification for investing in any shiny new bit of tech, in the hope that it will be the one that makes all the difference.

But it’s dangerous to see the success of new technologies as inevitable. In developing countries, technological hype has driven expensive investments in hardware billed as silver bullet solutions. There have been programmes to give every child a laptop, but teachers have lacked the skills and digital training to use them so the computers end up locked inside drawers. High-end medical equipment has gathered dust in hospitals worldwide. Such investments have not only wasted money in countries on tight budgets, but have also created widespread misunderstanding of how best to invest in technological solutions.

New research that I have been leading has investigated why some quick-fix programmes fail, and what it takes to succeed. The interventions that work do two things. They focus not just on hardware, but on the content, data sharing and system-wide connections enabled by digital technology. And they deploy technology only after careful consideration, and when it is appropriate to tackle a real, identified problem.

In Uganda, the web-based application Mobile VRS has helped increase birth registration rates from 28% to 70% across the country, enabling decision-makers to track health outcomes and improve access to services for these children. It worked because it targeted the root of the problem – children weren’t getting treated and vaccinated because practitioners didn’t know they existed.

In Kenya, the Ministry of Education has rolled out a programme called Tusome, a literacy platform with digital teaching materials and a tablet-based teacher feedback system. Its huge success stems from the fact that it targets two pre-identified areas that needed improvement – access to learning resources and teacher performance.

One approach is to use integrated dashboards that collect data for better decision-making. For example, both the Ghana School Mapping platform and the MoSQUIT mobile platform tracking malaria in India enable smarter resource allocation on a large scale. A community health workers’ programme run by Muso in Mali, which sends doctors to identify and treat patients in their homes in the most remote areas, has achieved the lowest child mortality rates in sub-Saharan Africa. Technology helped to scale up the programme as digital dashboards for monitoring drove productivity up.

Adaptive learning software that tailors learning based on data collected about a child’s performance is also showing huge promise. An example is MindSpark in India, which achieved an increase in maths performance of 38% in five months, with estimated costs as low as $2 a year when scaled up to more than 1,000 schools. The technology empowers learners, tailoring lessons to their strengths and working on their weaknesses.
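The feedback loop behind such software can be sketched very simply. The selection rule and data structures below are assumptions introduced for illustration, not MindSpark's actual algorithm: record each answer, estimate per-topic accuracy, and serve the next question from the learner's weakest topic.

```python
from collections import defaultdict
import random

# Minimal sketch of adaptive item selection: track per-topic accuracy and
# draw the next question from the weakest topic. Real products use far
# richer learner models; this only shows the data-driven feedback loop.

class AdaptiveTutor:
    def __init__(self, question_bank):
        # question_bank maps a topic name to a list of questions.
        self.bank = question_bank
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record(self, topic, was_correct):
        """Log one answered question for a topic."""
        self.attempts[topic] += 1
        self.correct[topic] += int(was_correct)

    def accuracy(self, topic):
        # Unseen topics score 0.0 so they get tried early.
        return self.correct[topic] / self.attempts[topic] if self.attempts[topic] else 0.0

    def next_question(self):
        """Pick the topic the learner is currently weakest in."""
        weakest = min(self.bank, key=self.accuracy)
        return weakest, random.choice(self.bank[weakest])

tutor = AdaptiveTutor({"fractions": ["1/2 + 1/4 = ?"], "decimals": ["0.3 x 10 = ?"]})
tutor.record("fractions", True)
tutor.record("decimals", False)
print(tutor.next_question())  # -> ('decimals', '0.3 x 10 = ?')
```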

A similar concept is behind onebillion, a programme that has just won Elon Musk’s $5 million XPrize for Global Learning, and is being scaled up across Malawi with the Ministry of Education. Evidence has shown that the programme has the potential to close the gender gap in maths – a crucial step in the journey of giving girls better access to education and the job market.

So is tech really the answer? There are many reasons to be optimistic, but only if the potential for poor choices and failure is kept in mind every step of the way. Technology really could be a catalyst for massive positive disruption, but – perhaps paradoxically – this will only happen if it is implemented with care and caution.


How a Career in Technology Empowered This South African Woman

Soso Luningo grew up in extreme poverty in a village near Port Elizabeth in South Africa’s Eastern Cape. Her home was essentially a small shed with a leaky, corrugated metal roof, no electricity, and just one bed. Most of her family slept on the dirt floor.

But Soso was bright and determined, and had the full support of her parents in her quest not just to survive her circumstances, but to thrive despite the many challenges she faced.

She focused on her education and won a scholarship to a university in Johannesburg, where she also enrolled in the Cisco Networking Academy. The networking academy, supported by international technology company Cisco, provides students with opportunities to build the skills they need to pursue a career in technology.

At first, people around Soso questioned her interest in a career in technology, which had traditionally been a man’s field.

“How are you going to deal with a computer? You’re a girl,” people told her. “How are you going to fix these things?”

But Soso persisted. She channeled their doubts into determination and earned a certification as a Network Engineer, beginning her career in technology.

“That’s when the doors started opening,” she said. “That’s when I started to actually make my life better, even my parents’ lives better.”

Her Cisco networking certification enabled Soso to land a job at a new casino that was opening near her village, where she became the first woman in the IT department, then the first female IT supervisor, and ultimately the first female head of the IT department. 


From there, Soso’s career continued to blossom and eventually she became a chief systems engineer, advising businesses and government agencies on their network deployments. Along the way, she gained the confidence to challenge both racial and sexist attitudes prevalent in South African culture. Her success has enabled her to build a more modern house for her family next to the shed with the leaky roof in which she grew up.

Today, Soso is passing along her commitment to education and passion for problem-solving to her own daughters and extended family, using education and technology to break the cycle of poverty.

But that’s not the only way Soso gives back to her community. She became an instructor at the Cisco Networking Academy, the same program that empowered her to build a better life for herself and those around her. Soso now has joined Cisco as a corporate social responsibility manager for South Africa, but she continues to mentor Networking Academy students, particularly young women, encouraging them to be bold and confident in pursuit of their dreams.

In 2017, the Cisco Networking Academy enrolled over 300,000 women and aims to increase that figure to 500,000 female students a year by 2020. By providing women with better access to education opportunities and technology training, Cisco aims to empower women, and in doing so, empower the next generation of global problem-solvers. Through the Cisco Networking Academy, Cisco is using education and technology to provide women with greater, more equitable economic opportunities.
