Lord C-J launches Select Committee report on AI: "We Need an Ethical Framework" (29 May 2018)


Recently I helped to launch the report of the House of Lords Select Committee on Artificial Intelligence, which I chaired. This is a piece I recently wrote about the report and its implications.

Barely a day goes by without a piece in the media on some new aspect of AI or robotics, including, I see, in today's Gulf Today. Some are pessimistic, others optimistic.

Elon Musk, the Tesla and SpaceX boss, has called AI more dangerous than nuclear weapons.

The late Professor Stephen Hawking put the future rather dramatically: "the development of full artificial intelligence could spell the end of the human race" and again "the rise of powerful AI could either be the best or the worst thing ever to happen to humanity."

Others, such as Dr Nathan Myhrvold, former CTO of Microsoft, have a more optimistic view of the future: the market will solve everything.

The CEO of Google, Sundar Pichai, says AI is more profound than electricity or fire.

We need to recognise that understanding the implications of AI here and now is important: Amazon's Echo and Echo Dot, Google Home, Siri on Apple devices and a variety of other systems are already in one in ten homes in the USA and UK.

This is the context for the report of our House of Lords AI Select Committee, which came after nine months of inquiry: consideration of hundreds of written submissions of evidence, hours of fascinating oral testimony, one session spent being trained to build our own neural networks, and a fair few lively meetings deciding amongst ourselves what to make of it all.

In our conclusions we are certainly not of the school of Elon Musk. On the other hand we are not blind optimists. We are fully aware of the risks that the widespread use of AI could raise, but our evidence led us to believe that these risks are avoidable, or can be mitigated to reduce their impact.

As a result of the Cambridge Analytica saga, consumers and citizens are far more conscious of the uses to which their data is put, both through AI and otherwise, than they were just a few months ago.

In all, the Committee considered 225 written submissions of evidence and held 22 oral evidence sessions.

Our task was "to consider the economic, ethical and social implications of advances in artificial intelligence". From the outset of the inquiry, we asked ourselves, and our witnesses, five key questions:

  • How does AI affect people in their everyday lives, and how is this likely to change?
  • What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
  • What are the possible risks and implications of artificial intelligence? How can these be avoided?
  • How should the public be engaged with in a responsible manner about AI?
  • What are the ethical issues presented by the development and use of artificial intelligence?

“For AI to continue to be a success, we need to work together.”

— Lord Clement-Jones

As the report is 181 pages long, with 74 recommendations, you will be pleased to hear that I won't be going into detail.

Our recommendations are intended to be practical and to build upon much of the excellent work already being done in the UK, and they revolve around five central threads which run through the report.

The first is that the UK is an excellent place to develop AI, and people are willing to use the technology in their businesses and personal lives. The question we asked was: how do we ensure that we stay one of the best places in the world to develop and use AI?

There is no silver bullet. But we have identified a range of sensible steps that will keep the UK on the front foot.

These include making data more accessible to smaller businesses, and asking the Government to establish a growth fund for SMEs to scale up their businesses domestically, so that they need not worry about having to find investment from overseas or prematurely sell to a tech major. The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK. Its recent AI Sector Deal is a good start, but only a start. Real ambition is needed.

A second thread relates to diversity and inclusion.

  • In education and skills
  • In digital understanding
  • In job opportunities
  • In the design of AI and algorithms
  • In the datasets used
In particular, the prejudices of the past must not be unwittingly built into automated systems. We say that the Government should incentivise the development of new approaches to the auditing of datasets used in AI, and encourage greater diversity in the training and recruitment of AI specialists.

A third thread relates to equipping people for the future. Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.

At earlier stages of education, children need to be adequately prepared for working with, and using, AI; data understanding is crucial.

A fourth thread is that individuals need to be able to have greater personal control over their data, and the way in which it is used. We need to get the balance right between maximising the insights which data can provide to improve services and ensuring that privacy is protected.

The ways in which data is gathered and accessed needs to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency.

This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.

AI has the potential to be truly disruptive to business and to the delivery of public services. For example, AI could completely transform our healthcare, both administratively and clinically, if NHS data is labelled, harnessed and curated in the right way. But it must be done in a way which builds public confidence, and that is why these new frameworks and mechanisms are important.

Transparency in AI is needed. We recommended that industry, through the new AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.

Of particular importance to the Committee was the need to avoid data monopolies, particularly by the tech majors. Access to large quantities of data is one of the factors fuelling the current AI boom. We heard considerable evidence that the ways in which data is gathered and accessed need to change, so that innovative companies, big and small, as well as academia, have fair and reasonable access to data.

Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. In our report we call on the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK. It is vital that SMEs have access to datasets so that they are free to develop AI.

The fifth and unifying thread is that an ethical approach is fundamental to making the development and use of AI a success for the UK. The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment.

A great deal of lip service is being paid to the ethical development of AI, but the time has come for action. We have suggested five principles that could form the basis of a cross-sector AI Code.

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles are just to get the ball rolling, and not just amongst academics, or between businesses, or between Governments. They must be agreed and shared widely, and work for everyone. Without this, an agreed ethical approach will never be given a chance to get off the ground.

We did not suggest any new regulatory body for AI, taking the view that ensuring ethical behaviour should be the role of existing regulators, whether the FCA, the CMA, the ICO or Ofcom. We also believe that in the private sector there is a strong potential role for ethics advisory boards.

AI is not without its risks, and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach will ensure the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.

All this adds up to a package which we believe will ensure that the UK remains competitive in this space.

AI policy is in its infancy in the UK. The Government has made a good start in policy-making, and our report is intended to be collaborative in spirit, helping to develop that policy to ensure it is comprehensive and coordinated.

In our report we asked whether the UK is ready, willing and able to take advantage of AI. With our recommendations, it will be.

The omens from Government are good. What we need from now on is to make sure that our recommendations are adopted. Where you agree with them, we welcome support in taking them forward with industry, academia and the Government. For AI to continue to be a success, we need to work together.


Lord C-J calls for Ethical framework for AI applications (17 June 2017)


As Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence, I recently gave a speech at the Berlin AI Expo on why business needs to develop an ethical framework for the use of AI and algorithms. This is what I said.

Society as a whole is becoming more and more conscious of the impact of AI. I am still not sure, however, how fully conscious we are of exactly how fast and impactful this will be.

But after a decade of fiction it is now becoming fact. At the recent Digital Innovators' Summit in Berlin, Tobias Hellwig, Editorial Developer at SPIEGEL Tech Lab, apparently asked his audience "Do you remember the talking car?", instantly reminding them that long before Alexa and Siri, characters such as Knight Rider's KITT and 2001: A Space Odyssey's HAL were the heroes of artificial intelligence.

But here and now, Amazon's Echo and Echo Dot devices have already generated 34,000 five-star reviews among consumers who are engaging with them on average 16 times a day. AI, particularly in the form of chatbots, is one of the big areas of tech to watch. Developers have written 34,000 chatbots for Facebook's Messenger in the last six months. Chatbots have taken off so much that apparently 25% of the users of the Microsoft Xiaoice chatbot in China and Japan have told "her" that they love her.

A good example of current and future impact is the professions, among whose members I number myself. As Professor Richard Susskind notes, we have seen some impact already: lawyers are using AI for due diligence; architects use CAD; engineers and accountants even more so. Even the clergy: there is an app for confessions! But so far none of this has changed the professional advisory model radically.

Far more radical changes are on the way. It is likely the machines will be able to do almost all routine professional work. Processing of data with the necessary algorithms will give rise to alternative ways of delivering practical professional expertise.

As Susskind says, the professions risk becoming as outdated as the old liveries and crafts: fletchers (arrow makers) or coopers (barrel makers), for instance.

The recent Royal Society report on machine learning states that healthcare is where the biggest impact will be.

A key factor here is the potential hollowing out of professional skills. How are young professionals and other experts going to get the necessary experience in mid-career when it is AI that is going to do so much of the work?

Others, such as Greg Ip, Chief Economics Commentator of the Wall Street Journal, argue that any pessimism is misplaced.

But on any basis there is a huge societal and moral dimension here. What are we content to allow AI to do in substitution for humans? What kinds of judgement can we allow it to make? Turning off life-support systems in a hospital, for instance?

It may be that in many circumstances a patient or client will want someone who can share the human condition, for instance a carer, a marriage counsellor, an undertaker or a receptionist. Even when making a will, AI by itself may not be good enough.

Professor Margaret Boden expresses the overarching question very simply for us as “even if AI can do something, should it?”

Many of these questions amount almost to those raised by Dr Yuval Noah Harari in his recent book "Homo Deus". What kind of human should we be turning ourselves into? How can we protect ourselves from our destructive nature?

AI systems learn from ("interrogate", perhaps) the data which they are presented with, which will inevitably reflect patterns of human behaviour. If we simply reflect human values when instilling values in AI, aren't we storing up trouble for ourselves, given humans' ability to go off the rails?

We have the Tay example easily to hand, where an AI chatbot from Microsoft machine-learnt all the wrong kinds of behaviour, in this case adopting racist and sexist language and attitudes from online conversations, and had to be shut down within a week in March last year.

Will we inflict violent behaviour on military robots?

Shouldn’t we be thinking about values in a rather different way?

A further dimension is the question of what kind of intrusion and monitoring of individuals in the employment context is acceptable. What values should companies be adopting towards this?

Then we have the issue of what skills will be required in the future. The Royal Society makes a strong case for cross-disciplinary skills. Other skills include cross-cultural competency, novel and adaptive thinking, and social intelligence. We need active new programmes to develop these skills. Young people need much better information at the start of their working lives about the growth prospects for different sectors, to be able to make career choices.

So we are going to need creative skills, data-handling skills and innovation skills, but we may well not need quite so much in the way of analytical skills in the future, because that will be done for us.

The jobs of the future have been described by the Chairman of IBM, Ginni Rometty, as being not about white-collar versus blue-collar jobs, but about the "new collar" jobs that employers in many industries demand, but which remain largely unfilled.

She says: "We are hiring because the nature of work is evolving – and that is also why so many of these jobs remain hard to fill. As industries from manufacturing to agriculture are reshaped by data science and cloud computing, jobs are being created that demand new skills – which in turn requires new approaches to education, training and recruiting."

She added: "And the surprising thing is that not all these positions require advanced education… What matters most is that these employees – with jobs such as cloud computing technicians and services delivery specialists – have relevant skills, often obtained through vocational training."

Brynjolfsson and McAfee, in their book The Second Machine Age, develop the skills discussion further, but crucially add that at the end of the day we have to decide which values to adopt in the face of technological change.

So we come back again to the moral and societal dimension. How do we ensure that the benefits of AI are evenly distributed? That the productivity gains of AI benefit us all, and not simply major corporations? Will the dividend be shared? Does the possibility that the distribution of jobs themselves will be so uneven mean that we need to contemplate a Universal Basic Income?

Ryan Avent in his perceptive book The Wealth of Humans concludes “Faced with this great, powerful, transformative force, we shouldn’t be frightened. We should be generous. We should be as generous as we can be.”

The Royal Society, in their recent machine learning report, talk about the 'democratisation of AI' in this context, and this brings great responsibilities for employers in terms of understanding the disruption to their workforce and undertaking retraining. The Open Source Initiative is one already available response to this.

It also means that there must be standards of accountability. The potential for bias in algorithms for instance is a great concern. How do we know in the future when a mortgage, a grant or an insurance policy is refused that there is no bias in the system?

The CEO of Microsoft himself, Satya Nadella, has urged creators to take accountability for the algorithms they create in view of the possible unintended consequences which only they could solve.

It is vital that throughout we treat AI as a tool, not as a technology that controls us. With software that has been described as "learned not crafted", it will be increasingly important for us to know that machine learning, in all its forms, is not autonomous and has a clear purpose and direction for the benefit of mankind.

How, therefore, do we take this forward? In the UK we are currently undertaking a major review of our corporate governance processes. This includes suggestions for new rights of approval for shareholders and greater stakeholder engagement beyond the shareholders. But for AI, governance really means a combination of legal, ethical and behavioural aspects of conduct that need to be established.

“Society as a whole is becoming more and more conscious of the impact of AI.”

— Lord Clement-Jones

AI presents a particular twofold set of challenges, identified by the Royal Society: first, the way in which machine learning algorithms use the data sets on which they are trained, in particular as regards privacy and data use; and secondly, the properties of the resulting algorithms after they have been trained on data, including, as they say, safety, reliability, interpretability and responsibility. To this I would add transparency, human involvement in quality control, and lack of bias.

Will Hutton and Birgitte Andersen of the Big Innovation Centre, in the context of the challenges of Brexit and the necessary industrial strategy, have argued for the creation of much stronger and more purposeful corporate cultures. As they say:

“The opportunities and challenges of digitisation, of artificial intelligence with all the ethical issues cross-cutting the enormous possibilities, of the internet of things etc. are best exploited by companies with a strong sense of purpose”

This chimes with many of the Thematic Pillars adopted by the Partnership on AI, founded by companies such as Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft, which now includes a rapidly growing list of non-Silicon Valley companies.

In this context I believe the time has come for businesses with a strong AI and algorithm component to consider setting up AI ethics advisory boards, to ensure that algorithms are free of bias when making decisions, for instance on credit ratings, mortgages or insurance, especially if that rather chilling concept, the all-encompassing "Master Algorithm", comes to fruition.

Such ethics advisory boards will also need to draw lines in terms of what they think it is appropriate for AI to do within a business, because change can be as rapid or as far-reaching as we want, and the impact can be as assistive to, or as much in substitution for, human employment and skills as desired.

The Royal Society, however, argue for a sectoral approach to governance, on the grounds that issues can be very "context specific" and that regulators should be specific to the sector.

I believe that something of a voluntary nature, more akin to a common governance framework, is desirable, and that it can be constructed without amounting to a one-size-fits-all solution. In research we already have this in the proposal for the Responsible Research and Innovation Framework. We need the corporate and commercial equivalent.

Nevertheless, however voluntary the governance aspects (perhaps on an increasingly commonplace "comply or explain" basis), there will be a need to legislate to establish legal liability where AI carries out its tasks incompetently, inadequately or with bias and thereby causes damage. We will need to determine to what extent corporate bodies or individual actors are liable.

What status will robots have in law? Last year, as reported by Future Advocacy in their report "An Intelligent Future?", the European Parliament released a proposal suggesting that robots be classed as "electronic persons" in response to the increasing abilities and uses of robotics and AI systems.

Added to this, and perhaps the greatest priority of all, is the need to ensure public understanding and acceptance of AI. This is not guaranteed simply by the increasing prevalence of AI and algorithm-based functions, which now appear in everyday forms from search engines to online recommender systems, voice recognition, translation and fraud detection.

In fact public awareness of AI and machine learning is very low, even if what it delivers is well recognised. It is clear that where there is awareness, a number of concerns are expressed, such as the fear that these technologies could cause harm, replace people and skew the choices available.

So public engagement is crucial to build trust in AI and machine learning. This in turn means ensuring that algorithms are interpretable and transparent. This brings us straight back to the governance area.

The potential of artificial intelligence (AI) to revolutionise our landscape is almost infinite, but there is a huge amount of work to be done before the ethical and societal issues are ironed out. Professor Stephen Hawking put the future rather dramatically: "the development of full artificial intelligence could spell the end of the human race" and again "the rise of powerful AI could either be the best or the worst thing ever to happen to humanity."

I wouldn't be so pessimistic, but we must absolutely build on the best of our human virtues and create a virtuous circle of trust and communication aligned with ethical behaviour and transparency of algorithm construction. Governments, business and academia, in close partnership with the public, should start work on this immediately.


Lord C-J on Startups: Financing getting better but major skills gaps (11 December 2014)


Here is what I said in the Queen's Speech debate about start-ups in the creative and tech industries, and how we can partner with Chinese creative industries.

In many respects the Queen’s Speech is to be welcomed, precisely for the fact that it does not contain a huge amount of new legislation. None the less, I welcome the carryover of the Consumer Rights Bill and the Deregulation Bill. Curiously, I note for the aspiring statesmen among us that it will, among other things, make statues easier to erect. I do not know whether your Lordships noticed that.

In particular, I welcome the introduction of the Small Business and Enterprise Bill outlined by my noble friend Lord Livingston because today, among other issues, I want to deal with the key question of start-ups in the tech and creative sector and what we need to do to ensure their success. I enjoyed the digital speech of the noble Lord, Lord Mitchell. According to official figures, the creative sector grew by almost 10% in 2012 and outperformed all other sectors of UK industry. It accounted for 1.6 million jobs in 2012. However, this underestimates the contribution of the creative industries. Many would say that they employ at least 2 million, so I welcome the Government’s intention to reclassify their contribution to GDP to bring back in software. We should not forget, too, the key role that arts and culture perform in providing the creative industries with talent. The approach to their funding needs reappraising, particularly in the light of the CEBR report on their contribution to the economy.

A crucial factor in one area of growth has been the tax treatment of film production, followed by high-end television and animation. In April, video games relief was cleared by the EU, which was excellent news. The new theatre production tax relief and patent box will have a major impact too. From a recent presentation at the Google Campus in Tech City to the Communications Committee, it is clear that we have come a long way since Silicon Roundabout morphed into Tech City. The noble Lord, Lord Mitchell, gave some very interesting figures. There have been more than 15,000 start-ups there in each of the past two years.

“We should not forget, too, the key role that arts and culture perform in providing the creative industries with talent.”

— Lord Clement-Jones

More widely, Trip Hawkins, who founded Electronic Arts in 1982, recently said that Britain is the most creative country in the world and can lure top technology businesses away from Silicon Valley. However, to fulfil that promise, we need to ensure they have access to the skills and finance they need to grow. It seems they now have good access to early-stage finance with a variety of angel investors through the Government’s enterprise investment schemes. Indeed, these schemes in the UK are now said to be among the best in the world. Tech companies have also benefited from the business growth fund, enterprise capital funds and the enterprise finance guarantee. Crowdfunding is beginning to have a real impact. Exceptionally among the banks, Santander has introduced its imaginative breakthrough programme for fast-growing start-ups. The important thing now, as the Creative Industries Council has identified, is promotion of these schemes.

However, we were told that it is in the later stages, where hundreds of millions of pounds are required for investment or a venture capital exit is needed, that we are behind the US. Are UK financial institutions too risk-averse? If so, there is a danger of business moving to the US at this funding stage. None the less, US institutions which understand the potential of tech and creative start-ups are now moving here. There is also good evidence from recent listings on the Stock Exchange that we are making progress. The creation of the new High Growth Segment to encourage companies to list here is having an impact.

The talent available, however, is far below what we need. Start-ups in Tech City need a mixture of technical and creative skills to develop their new digital services. Knowledge of digital technologies is particularly crucial. We need 1 million tech jobs to be filled by 2020 to keep up with demand. So I welcome the inclusion in the curriculum of coding, or computer science, from this September for five to 16 year-olds. But even if the pipeline from schools and universities is there, finding the right talent can be tough. Training and proper apprenticeships are hugely important.

Even if we fill the gap in the long term, we will in the short term still be reliant on overseas undergraduates and postgraduates. I welcome the developments with the exceptional talent visas, which show some increased flexibility, but as Policy Exchange’s recently published Technology Manifesto makes clear, we must ensure our visa regime is fast and user-friendly, to attract them into both employment and our higher education institutions.

If we get it right the prize is very great. Policy Exchange says that the internet economy will be 16% of GDP by 2016. We are already the highest net exporter of computer and information services among the G7 countries. Already our online retail surplus is larger than that of Germany and the US combined. This also means that we need to break down the barriers to e-commerce across the EU to create a genuine European digital single market.

Clusters, or hubs, are of huge importance to the tech and creative industries and, as Policy Exchange says, there are many more than just those in London. In terms of innovation, creativity, finance, promotion and skills, clustering is now the name of the game. But this raises the whole question of whether our cities operate on the right scale, especially when compared with cities in emerging markets, and whether they have the necessary powers and control over their own finances. After all, more than 90% of tax is collected by central government.

I was at the opening of the International Festival for Business in Liverpool yesterday, and a great showcase it was for both Britain and Liverpool. We had contributions that demonstrated real commitment to the creative sector from the Prime Minister, my noble friend the Trade Minister and the Culture Secretary. I was also delighted that my noble friend acknowledged the value of professional services, but it reminded me of the words of the noble Lord, Lord Heseltine, in the paper No Stone Unturned: In Pursuit of Growth, which he wrote in 2012 about cities and regeneration. He said:

“What Liverpool forced me to confront was the extent to which these conditioning qualities had been driven from municipal England. The dynamism that had built the city was gone”.

There is a major RSA project under way, the City Growth Commission, chaired by Jim O’Neill, which will report in October this year. As the commission says, too many of the UK’s urban areas outside London are failing to achieve their growth potential. How can we make our cities competitive in the global economy? How can we strengthen our clusters? How can we tie in our universities as incubators? It is vital that we build on initiatives such as the LEPs, city deals and the regional growth fund.

As the Lords Select Committee recently concluded, our creative industries are increasingly part of Britain’s soft power. They are a vital aspect of our international trade and investment. A few days ago I took part in the third Technology Innovators Forum—TIF-IN—in Qingdao, opened by the Secretary of State for Business, which reflects that with the growth of digital platforms and applications there is a symbiotic relationship between the tech sector and creative content, as well as the need to promote our UK creative industries in emerging markets, especially China.

At TIF-IN, my right honourable friend launched the Global Digital Media and Entertainment Alliance with China, which will promote long-term relationships in the digital media and entertainment sectors. I am optimistic that it will greatly benefit the UK’s creative industries. We have a terrific team of UKTI people in China, with increasing sector specialisms. They have done an excellent job post-Olympics in developing Britain through the GREAT campaign. The FCO is very supportive. We now have an expert IP attaché in Beijing and other major markets.

However, UK companies, particularly SMEs, need persuading to be bolder. We need to demonstrate the benefits of trade and investment with emerging markets more effectively, and we need a much longer pipeline of SMEs lining up to do business in emerging markets. That means more UKTI resource in the UK, especially in the English regions.

Finally, as I cannot take part in Thursday’s debate, I want to mention the tourism sector. In that context I very much welcome the announcement last week at the British Hospitality Association summit of the creation of a new tourism council along the lines of the successful model of the Creative Industries Council. I hope that this will lead rapidly to a range of measures that will ensure the competitiveness of our tourism industry compared to other European destinations. That is also a vital part of our soft power.

