Name:
Privacy: global perspectives
Description:
Privacy: global perspectives
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/b49ba958-8001-4798-9a76-2d437fd10186/videoscrubberimages/Scrubber_1.jpg?sv=2019-02-02&sr=c&sig=sihM2vDLq8XW2l%2BZv6yEnHlgnzWNCAdz9MOxi21snQQ%3D&st=2024-10-16T00%3A42%3A36Z&se=2024-10-16T04%3A47%3A36Z&sp=r
Duration:
T00H41M11S
Embed URL:
https://stream.cadmore.media/player/b49ba958-8001-4798-9a76-2d437fd10186
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/b49ba958-8001-4798-9a76-2d437fd10186/Privacy global perspectives-HD 1080p.mov?sv=2019-02-02&sr=c&sig=ymUJCr7KRl82HMrzdsyBzZAxOKnosRyhZueVpuAZAyQ%3D&st=2024-10-16T00%3A42%3A36Z&se=2024-10-16T02%3A47%3A36Z&sp=r
Upload Date:
2021-08-23T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
[MUSIC PLAYING]
CHRIS CHAN: Hello and welcome. My name is Chris Chan, and it's a pleasure to be serving as the moderator of this session entitled Privacy-- Global Perspectives. We're fortunate today to have three geographically diverse speakers to provide us with their insights on this topic. And the trouble that we had finding a time slot for this recording session, I think, is testament to the truly global nature of the group.
CHRIS CHAN: And I do want to express my sincere thanks to the contributors for putting up with some really unsociable hours as we went through the planning and recording for this session. So our three speakers will each present in turn for about 10 minutes. That will conclude the recorded part of the session. And we'll then break out into groups for discussion. So let's get started.
CHRIS CHAN: Our first speaker, Andrew Cormack, has worked for Jisc, the UK's national research and education network, for more than 20 years. As chief regulatory advisor, his focus is on policy and regulatory issues. His role has expanded from networks to technical services, such as federated authentication and, more recently, services based on data. Throughout, his aim has been to make law and technology work together rather than in opposition.
CHRIS CHAN: His presentation today is entitled Thinking with GDPR. And without further ado, I pass the time to Andrew.
ANDREW CORMACK: Thank you, Chris. That looks promising, OK. One thing it seems everyone knows about Europe is that we have a strong privacy law-- the General Data Protection Regulation, or GDPR. In this talk, I'd like to get you viewing that not just as a law but as a really useful way to think about designing systems and processes, and maybe challenge a few myths along the way.
ANDREW CORMACK: Here's what the GDPR itself says it's about. You'll hear a lot about the rules relating to the protection of natural persons-- some of it inaccurate. So I'm not going to talk much about that. What I'd like to focus on is the much less referenced rules relating to the free movement of personal data. GDPR is explicitly, in its very first article, about helping the movement and use of personal data, provided that's done in a way that's safe for individuals.
ANDREW CORMACK: So my first myth-- GDPR isn't primarily about individuals. It's about the organizations that handle their data. All the GDPR principles are aimed at them. And those principles are a really useful guide to designing safe products, services, and other activities. For example, accountability requires not only that organizations are compliant but that they can show they're compliant.
ANDREW CORMACK: So we must think before we start to use personal data about the design of our systems and processes, safeguards against error and misuse, how we will operate them safely, and how we will ensure those plans actually happen. The key point is that the focus here must be on the individuals and groups whose data we process, not on the organization. And the GDPR provides a tool-- the Data Protection Impact Assessment, DPIA-- to guide that thinking.
ANDREW CORMACK: DPIAs are mandatory for large-scale and otherwise high-risk processing. But they're a really useful tool for thinking about smaller activities too. And once you've done a DPIA, why not publish it? Show your users and other stakeholders that you are taking care of their interests.
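To make that concrete, here is a minimal sketch of how a DPIA's core questions might be captured as a structured record. The field names and example values are hypothetical, not an official template-- GDPR Article 35 sets out what a DPIA must cover, but not a format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DPIARecord:
    """Minimal, illustrative DPIA record. Field names are hypothetical;
    GDPR Article 35 requires at least a description of the processing,
    a necessity assessment, the risks to individuals, and the measures
    that address those risks."""
    processing_description: str          # what data, about whom, and how
    purposes: List[str]                  # why -- see purpose limitation
    lawful_basis: str                    # e.g. "contract", "legitimate interests"
    necessity_justification: str         # why no less intrusive way exists
    risks_to_individuals: List[str]      # risks to the people, not the organization
    mitigations: List[str] = field(default_factory=list)
    published: bool = False              # publishing helps build stakeholder trust

# Hypothetical example values, purely for illustration.
dpia = DPIARecord(
    processing_description="Usage logs of a library discovery service",
    purposes=["improve search relevance for enrolled students"],
    lawful_basis="legitimate interests",
    necessity_justification="aggregated counts suffice; no per-user profile kept",
    risks_to_individuals=["linking search history to an individual"],
    mitigations=["pseudonymous identifiers", "90-day retention"],
)
```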
ANDREW CORMACK: Another principle, both of law and design, is purpose limitation. This requires us to think clearly and precisely about why we're collecting, processing, and using personal data. Multiple purposes may be OK. But we have to be clear, in our own minds and in our documentation, what those are. "In case it comes in useful" isn't a convincing purpose, either for regulators or for stakeholders. And having set out our purposes, we must avoid creep beyond them.
ANDREW CORMACK: Once you've identified one or more purposes, you need to ensure that your organization has a lawful basis for that purpose. Is it something you need to do in order to fulfill an agreement with the individual? For example, to pay a salary or deliver a service they've requested. Or something you are required to do by law-- telling the tax office about the salary.
ANDREW CORMACK: Or-- and we hope not to be in this situation-- something that's needed to save a life or prevent serious injury. Or something that's in the public interest and where our organization is best placed to do it. Or something that is in the legitimate interests of the organization, or of individuals or third parties it may work with. Each of these has its own conditions that our design must satisfy.
ANDREW CORMACK: In particular, for public interest and legitimate interests, we must balance our interests with those of the individuals whose data we propose to process. If it's hard to meet those conditions, then you probably need to rethink either your design or whether you should be doing this at all. Second myth-- GDPR isn't about preventing processing. It's about allowing processing that's necessary.
ANDREW CORMACK: And necessary has a very specific meaning-- that there's no less intrusive way to achieve the purpose. So it forces us to think-- again, good design practice-- about minimization. How little data does the purpose need? How little processing? How little disclosure both internally and externally? And, how soon can we get rid of it? GDPR and its guidance recognize lots of technologies as contributing to this.
ANDREW CORMACK: Attributes-- what someone is (student, staff, guest)-- are often more useful than who they are anyway. Pseudonyms, which let us recognize a returning user but not identify them. Statistics, where we can achieve our purpose with counts, averages, and so on. Roles that allow us to define and enforce policies. And federations, which we'll come back to later.
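As a concrete illustration of the pseudonym idea-- recognizing a returning user without identifying them-- here is a minimal sketch using a keyed hash. The key and the truncation length are arbitrary choices for the example; a real deployment would also need key storage controls and rotation.

```python
import hmac
import hashlib

# Secret key held only by the organization; without it, outside parties
# cannot recompute or reverse the pseudonym. (Illustrative value only.)
SECRET_KEY = b"replace-with-a-randomly-generated-key"

def pseudonym(user_id: str) -> str:
    """Derive a stable pseudonym: same input -> same output, so a
    returning user is recognizable, but the raw identity is not exposed."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# A service sees only the pseudonym (plus perhaps an attribute like
# "student"), never the name or email.
print(pseudonym("alice@example.ac.uk"))  # some stable hex string
print(pseudonym("alice@example.ac.uk"))  # same value -- a returning user
```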
ANDREW CORMACK: Third myth-- GDPR isn't mostly about choice. It's about notice. With very few exceptions, people must be told the natural consequences of the situation they're in or about to enter. Most of what you must tell them is the product of the thinking in the first two stages-- who is processing their data? What processing you're doing, including the legal basis. Why, including the purposes.
ANDREW CORMACK: How long this will continue, and what happens to the data when it stops. Who else may be involved and where, and how to exercise their rights over their data. Sometimes, but far less often than is claimed, individuals will actually have a free choice whether or not to give you their data. But remember the five legal bases. If you're offering them a service, or required by law to process the data, or saving a life, or serving a public [INAUDIBLE] interest, then their choice probably isn't free.
ANDREW CORMACK: In those cases, this quote from Guy Singh-Watson is relevant: "Organizations should be doing the right thing, not abdicating responsibility by instead asking customers to choose what that is." Guy isn't a data protection guru-- he's a farmer who runs an environmentally responsible vegetable box scheme. If he knows what corporate responsibility looks like, shouldn't we try a bit harder?
ANDREW CORMACK: Most often, I'd suggest, true consent will be appropriate when you'd like an individual to volunteer information to get into a deeper relationship with you, not to discover whether they want a relationship at all. If you can't find a basis for that initial relationship among the first five bases, maybe rethink your plans. So, actually, GDPR helps us meet the expectations of our users, customers, and wider stakeholders.
ANDREW CORMACK: We reduce the flow of information, increase the benefits we deliver from what we have, and by doing that publicly, we can provide a basis for increasing confidence and trust. So, how does that work in practice? Let's look first at how students get access to the content they need for their courses. Historically, that was a two-party relationship, where the student had to set up this personal account containing lots of personal data.
ANDREW CORMACK: Most of which didn't actually help the provider, either to decide whether the student should have access-- because it was all self-declared-- or to deal with problems if they misbehaved. Thinking with the GDPR principles and some smart technologies, we realized that inserting the student's institution as a trusted third party produced a very different data flow. The student requests access, and the provider checks with the designated institution whether they're covered by the license.
ANDREW CORMACK: The institution then uses the existing relationship and data to strongly authenticate the student or associate them with a license, and to undertake to deal with any misbehavior. Win-win-win.
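Here is a toy sketch of that three-party flow, with hypothetical class and attribute names-- real federations use standards such as SAML or OpenID Connect for these exchanges, but the data-minimizing shape is the same.

```python
class Institution:
    """Trusted third party: authenticates its own members and vouches
    for license coverage without disclosing who the student is."""
    def __init__(self, licensed_roles):
        self.licensed_roles = licensed_roles
        self.members = {}  # user_id -> role, from the existing relationship

    def assert_entitlement(self, user_id):
        role = self.members.get(user_id)
        # Release only the minimal claim the provider actually needs.
        return {"covered": role in self.licensed_roles, "role": role}

class ContentProvider:
    def grant_access(self, assertion):
        # The provider never sees a name, email, or self-declared profile --
        # just whether the designated institution vouches for the request.
        return assertion["covered"]

uni = Institution(licensed_roles={"student", "staff"})
uni.members["u123"] = "student"

provider = ContentProvider()
assertion = uni.assert_entitlement("u123")
print(provider.grant_access(assertion))  # True: access granted, minimal data flow
```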
ANDREW CORMACK: Or, thinking about analytics-- institutions do stuff, whether teaching, providing support, or providing facilities. Data trails generated by students and staff in using those facilities can be analyzed, as a compatible purpose, to work out how to improve them-- an obvious legitimate interest, with the balancing test ensuring it's done safely. If additional information from the student would help, we can ask them to provide it, always being aware that they may refuse or lie. And if there's an opportunity for individual improvement as well as system-wide, we can suggest that. Again, the student can refuse to follow the suggestion. Limiting consent to these last two stages means our analytics and improvements can be based on whole-cohort data, not self-selected.
ANDREW CORMACK: Students can be reassured that the institution has weighed the risks and benefits to them, and that their actions in donating data or acting on personalized suggestions are free and fully informed. If you'd like to know more about this work, there are some references-- the slides will be available afterwards. I'm sorry I cannot be with you. It's 3 o'clock in the morning here.
ANDREW CORMACK: But here's how to get in touch. Thank you.
CHRIS CHAN: Thank you so much, Andrew. Yes, it is very late where you are. So please don't feel bad about not being able to come to the live session. And thank you so much for the presentation. I think that's certainly a lens I hadn't thought of looking at the GDPR through-- as something that helps enable the effective use of information rather than being a roadblock that we have to work around. So thank you.
CHRIS CHAN: Moving swiftly on, next we have Christine Suver, who leads the research governance and ethics group at Sage Bionetworks. The group developed some pilot data sharing models and tools to enable open research collaborations. In her work, she helps research participants and researchers determine the appropriate governance approach to contribute, collect, access, and share research data responsibly.
CHRIS CHAN: Her presentation today is entitled mHealth Wearables and Apps-- A Changing Privacy Landscape. So, Christine, over to you.
CHRISTINE SUVER: Thank you. Let me make sure that you can see my slide.
CHRIS CHAN: It's looking great.
CHRISTINE SUVER: Perfect, thank you. So today, I want to talk to you about digital health. And digital health is a pretty large field. You may be familiar with some aspects of digital health, like telemedicine, for instance. But today, I want to talk to you about one aspect, which is the use of consumer wearables and mobile applications to collect health data-- it's called mHealth. mHealth is a fast-growing field. It's estimated that the global mHealth market is going to reach about 100 billion US dollars by 2023.
CHRISTINE SUVER: It's a growing field because there are new technologies that enable the collection of new types of data that provide a much broader and more complete view of someone's health. So wearables and mHealth apps can continuously monitor some aspects of your physical health, like how much you exercise. And most people count every step these days. They can also monitor your heart rate, your glucose level, your blood pressure.
CHRISTINE SUVER: So it's a very, very rich data collection that occurs pretty much automatically through sensors. And that rich data set can supplement the data that is collected occasionally when you go to visit your doctor. So, what privacy rules apply to this mHealth domain? Many countries have enacted some kind of privacy regulation, privacy laws. And Andrew just talked to us about the GDPR, which is one of the most important pieces of legislation, as it applies to 28 different countries in the European Union and three additional ones that are not part of the European Union.
CHRISTINE SUVER: In the US, in contrast, there is no comprehensive regulation on data privacy. Data privacy is handled domain by domain, right? So there is regulation about communications data, regulation about financial data, and regulation about health-related data. Some states have started to enact data privacy laws, like the California Consumer Privacy Act and also the New York SHIELD Act.
CHRISTINE SUVER: And some other states are considering different privacy laws. But most of the privacy laws in the US do not directly control how health data needs to be regulated. In the US, the landmark regulation for how to protect health information is HIPAA. So even though different countries have developed their own privacy regulations-- or, in the US, different states are considering different privacy regulations-- we find that there are some global privacy principles that seem to be universally accepted.
CHRISTINE SUVER: And those are that the collection, use, and processing of personal data should at a minimum be lawful, purposeful, and transparent; must be limited in time and scope; should be controllable and of high quality; and the data must be secure and used for its intended purpose only. And those privacy principles and privacy regulations apply to the processing, collection, and use of personal data.
CHRISTINE SUVER: And there are different types of personal data. Health information is a special category of personal data. It's considered to be more sensitive and requires much more protection. And that's why one of the questions that we have is, what about mHealth data? What about this type of health-related information that is collected continuously through sensors? Is that considered medical data and a special category of data?
CHRISTINE SUVER: In the US, the answer is that it depends. In the US, apps that monitor your health and lifestyle are not regulated under HIPAA unless they are used in a regulated research context or they are used to make health-related decisions. It means that the FDA doesn't control lifestyle apps that track things like your diet, exercise, and sleep, even if those apps or wearables are targeting children.
CHRISTINE SUVER: And yet, those apps are increasingly handling a lot of very personal and sensitive information. And all that data collection is perfectly lawful and legitimate, based on consent that people provide by accepting terms of service or a privacy policy. But even when the data is collected anonymously, if it involves geolocation, for instance, it can show a pattern of interaction between the individual and the device.
CHRISTINE SUVER: So even if individuals are not identified, the information gathered from those apps and wearables can be very significant and provide a lot of information. And here, I have an example of the Strava health tracking app, which maps people's exercise routes. And it was found very quickly that when it was used by soldiers on military bases, the Strava heatmap that is then shared with the world ended up giving away the positions of those military bases in a lot of detail, as you can see.
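A tiny sketch, with made-up coordinates, of why "anonymous" location traces still leak: the most frequent cell in a trace tends to be a home, a workplace, or-- as in the Strava case-- a base.

```python
from collections import Counter

# Made-up GPS trace (lat, lon rounded to coarse cells) from an "anonymous"
# fitness tracker. No name attached -- yet the pattern speaks for itself.
trace = [
    (34.512, 69.180), (34.512, 69.180), (34.512, 69.180),  # runs start/end here
    (34.515, 69.183), (34.518, 69.186), (34.515, 69.183),
    (34.512, 69.180), (34.512, 69.180),
]

# The most frequent cell in a running route is typically where the
# runner lives, works, or is stationed.
(base_cell, visits), = Counter(trace).most_common(1)
print(f"Likely base location: {base_cell} ({visits} visits)")
```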
CHRISTINE SUVER: So apps that record your location may pose more risk to privacy. And in the time of COVID, you've seen this explosion of contact tracing apps. In a study of about 500 COVID-related apps [INAUDIBLE] country, [INAUDIBLE] found out that [INAUDIBLE] apps are collecting information that is not necessarily needed for contact tracing. Why would such an app need to have access to your phone's microphone or your phone's camera?
CHRISTINE SUVER: So we really have to make a trade-off between protecting privacy on the one hand and public health interests on the other-- what you need in this pandemic. But there are also some situations where, in fact, we don't want anonymity. We don't want the data to be anonymous because it will not be as useful. We want to share personal and private information. And this is the case for personalized medicine, for instance.
CHRISTINE SUVER: We want to be able to collect information about an individual to provide very tailored medical services. And for that, we want to collect not only the information in your medical record or your genomic information, but we also want to know a lot about your environment and lifestyle. So there is really no one-size-fits-all when you think about privacy.
CHRISTINE SUVER: But yet, one of the principles that is really important is to obtain the data by lawful and fair means and, when appropriate, with consent. That consent is given through terms of service and privacy policies most of the time. So, when was the last time you read the terms of service? I know that I have probably never read a complete terms of service. And that's one of the biggest lies on the internet.
CHRISTINE SUVER: We are all scrolling down, clicking "I Accept," and never reading the terms of service that would disclose how our data is being used. What about privacy policies? The New York Times Privacy Project looked at a number of privacy policies. And they found that those privacy policies are really difficult to read. They are really complicated.
CHRISTINE SUVER: And looking over the last 20 years, those privacy policies have become much more complex. For instance, the Google Privacy Policy in 1999 was about 600 words. 20 years later, in 2019, after GDPR was enacted, the new Google Privacy Policy is close to 4,000 words. And if you consider that an adult reads on average 300 words a minute, those privacy policies now take about 20 minutes to read.
CHRISTINE SUVER: And no one is spending 20 minutes reading those policies. After GDPR was enacted-- I don't know if you remember, but all of the apps were updating their privacy policies. So if you had about 30 apps on your phone and you had to read all of those updated privacy policies, it would have taken you more than 10 hours. So there is a real need to improve how people get information about their data and how their data is being handled.
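A quick back-of-the-envelope check of those figures. The 20-minutes-per-policy estimate is the talk's own; at a raw 300 words per minute, a 4,000-word policy would take closer to 13 minutes, so 20 minutes allows for the slower pace of legal text.

```python
# Back-of-the-envelope check of the figures quoted above.
WORDS_PER_MINUTE = 300          # average adult reading speed
google_policy_words = 4_000     # Google's 2019 policy, per the talk

raw_minutes = google_policy_words / WORDS_PER_MINUTE
print(f"Raw reading time: {raw_minutes:.0f} minutes")     # ~13 minutes

minutes_per_policy = 20         # the talk's estimate, allowing for legal text
apps_on_phone = 30
total_hours = apps_on_phone * minutes_per_policy / 60
print(f"{total_hours:.0f} hours for 30 updated policies")  # 10 hours
```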
CHRISTINE SUVER: And an interesting concept right now is the concept of a privacy label. And Strava has been one of the pioneers in this area, where they have developed a privacy label that is like a food label. And I just show a really small example of that, where you can very quickly get more information about the main aspects of how the data is being collected.
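As a sketch of what a machine-readable privacy label might contain-- the field names here are illustrative and are not Strava's or Apple's actual schema:

```python
import json

# Hypothetical privacy "nutrition label" -- illustrative fields only.
privacy_label = {
    "data_collected": ["location", "heart_rate", "contact_info"],
    "purposes": ["app functionality", "analytics", "advertising"],
    "linked_to_identity": True,
    "shared_with_third_parties": ["advertisers"],
    "retention": "until account deletion",
}

# Rendered as a short, scannable summary -- the point of a label is that
# it can be read in seconds, unlike a 4,000-word policy.
print(json.dumps(privacy_label, indent=2))
```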
CHRISTINE SUVER: Apple's privacy label was just released as well. So Apple is now asking developers who want to post an app on its marketplace to self-report what kind of information is being collected. And that is another form of privacy label that is interesting. But one part of the topic I'd like to discuss with you-- I'd love to have a discussion with all of you-- is this notion of continuous consent.
CHRISTINE SUVER: Instead of asking people to read a privacy policy or agree to terms of service at the time they download an app or start using a wearable, why not have a continuous discussion, asking people more often how they think about privacy and how their data can be handled? So I'm looking forward to more discussion with you. Thank you.
CHRIS CHAN: Great. Well, thank you, Christine. I'm a long-time user of the Apple Watch. And your presentation certainly gave me some things to think about. And I'd love to join that discussion too about this idea of maybe we need to continuously ask folks for consent when using their data in this way. OK. We come to the last but certainly not least presentation for this session.
CHRIS CHAN: So we've touched upon the GDPR. We've touched upon legislation in the US. But, what about China? Well, for today's last presentation, we have Judy Bai, who is an expert in scholarly communications as well as research workflow tools and data in China. She is currently director of business development, China, for Digital Science, where she's responsible for shaping the company's greater China strategy and delivering its products, services, and thought leadership to customers and partners there.
CHRIS CHAN: Her presentation today is entitled A Look at China's Draft Personal Information Protection Law. Judy, please go ahead.
JUDY BAI: Thanks, Chris, for the nice introduction. Hello, everyone. It's good to be with you. And I'm very glad to have this opportunity to speak to you about the topic of data protection, which is becoming increasingly important given that we are living-- spending more and more of our time online, especially during the pandemic. As you can probably already tell from Chris's introduction, I'm not a legal expert on this particular subject.
JUDY BAI: I have a background in academic publishing with Elsevier and Nature, and then joined Digital Science in February 2018, based in Shanghai. So in case you didn't know, Digital Science is a technology company developing tools to support the research lifecycle. We're headquartered in London. And the company already has policies in place to comply with the GDPR requirements of the [AUDIO OUT].
JUDY BAI: So although we're only starting to establish a presence in China, we think it's important to be aware of the legislation and policy developments, to review our existing policies, and also to make necessary preparations where needed. So I'll be using the next 10 minutes or so to share with you some of the preliminary information we have gathered on data protection laws in China. And I hope you find it useful to help kickstart conversations within your organizations.
JUDY BAI: So with measures to ensure privacy being prioritized worldwide, many countries have framed relevant laws and regulations on personal information protection-- most notably the European legislation, GDPR, which you have just heard about from Andrew. So on October 21st, 2020, China released its draft Personal Information Protection Law for a months-long public consultation after the first review by the Standing Committee of the National People's Congress.
JUDY BAI: Actually, before 2020, China had already implemented a series of laws and regulations that cover the protection of personal information. For example, China's Cybersecurity Law, which came into force in 2017, governs the protection of personal information with a focus on the protection of information in cyberspace, the protection of critical information infrastructure, and the regulation of network operators.
JUDY BAI: And in July, 2020, a draft of the Data Security Law was also released for public comments. So this draft PIPL marks China's first attempt to systematically and legislatively define, establish, and integrate the provisions on the protection and regulation of personal information. So it is generally regarded as a major milestone in China's legislative effort to establish a set of comprehensive regulations around data privacy.
JUDY BAI: In the interest of time, we'll be focusing on the PIPL in our session today. This draft PIPL is a concise document, under 8,000 characters, and it comprises eight chapters with 70 articles. So those familiar with GDPR will find some similarities in the draft PIPL when reading it for the first time. Some concepts are inspired by the GDPR.
JUDY BAI: Among other things, the draft PIPL sets out data protection principles, specific rules for the processing of both personal information and sensitive personal information, the rights of individual data subjects, and also penalties for breaches. So here are a few key features of the draft PIPL which are worth highlighting. First, extraterritorial application-- in general, PRC laws don't have extraterritorial effect.
JUDY BAI: However, this draft PIPL appears to follow the approach taken by the GDPR and will have a long-arm extraterritorial application to any personal information processing activities of organizations carried out outside the country. It is also worth noting that a non-PRC established organization that is subject to the PIPL due to this application should appoint a representative within the country to deal with data protection-related matters.
JUDY BAI: The second is new legal bases for data processing. Under existing laws and regulations, a data subject's consent has been established as the only legal basis for processing personal information in China. In the draft PIPL, several new legal bases are introduced for personal information processing, depending on whether the processing is necessary-- for example, for legal duties or obligations, or to respond to a public health emergency, or to protect the life, health, and property of a natural person in an emergency.
JUDY BAI: Also, the issue of data localization and cross-border data transfer has been the subject of much discussion and debate since the Cybersecurity Law came into force in 2017. Under the new draft PIPL, there are generally positive developments, providing more alternatives for international companies to manage their cross-border data transfers in a legally compliant manner-- to some extent similar to the thinking behind binding corporate rules under the GDPR.
JUDY BAI: And last but not least, hefty fines. We know that serious legal consequences have been historically absent from Chinese data protection laws. The draft PIPL takes a different approach. Organizations violating the law could face fines of up to 50 million Chinese yuan-- equivalent to about 7.5 million US dollars-- or 5% of the previous year's annual turnover, together with business suspension or license revocation, and could also bear civil or criminal liability.
JUDY BAI: So with this information in mind, we're now ready to answer some questions important to our company. First of all, will my company be regulated by China's PIPL? One common misunderstanding of the PIPL is that it is only applicable to internet firms, such as the tech giants Tencent, Baidu, or [INAUDIBLE].
JUDY BAI: But, actually, as we learned just now, as long as you have a business running in China, you will be regulated by the PIPL, as there is always personal information, such as email addresses and phone numbers, that gets collected and processed during business operations and interactions with customers. Even if your company doesn't have a physical presence in China, it may still be regulated if it processes the personal information of people in China for the purpose of providing products or services to people in China, or analyzing and evaluating the activities of people in China.
JUDY BAI: So examples include selling products via online shops to Chinese consumers, providing online language-training courses, or using AI-based technology to surveil people in China-- facial recognition, location tracking, profiling, et cetera. And as we've just discussed, under the PIPL, entities outside China that collect and analyze data for these purposes will need to appoint a data protection representative or organization within China to manage these matters.
JUDY BAI: So how do we assess the potential impact on a company's IT infrastructure and applications? We need to look at the following questions. First, whether the personal information processed by your company can be transferred out of China. We already know that under the PIPL, cross-border transfer of personal data to foreign authorities can be achieved, but it will still require prior approval from Chinese regulators.
JUDY BAI: Second, whether any sensitive personal information is being processed. We know that the GDPR sets out an exhaustive list of special categories of personal data. The PIPL list of sensitive personal information is actually shorter, but it can cover a broader scope of personal information compared to the GDPR, depending on how strictly or loosely the definition of sensitive personal information is interpreted.
JUDY BAI: Third, whether any data classification and retention techniques are deployed in your organization. This will require a company to deploy relevant techniques to classify information and also to implement a proper data retention policy that deletes information no longer needed for the original purpose of collection.
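As an illustration of what enforcing such a retention policy can look like in practice, here is a minimal sketch. The classification names and retention periods are hypothetical, not prescribed by the draft PIPL.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data classification; a real policy
# would be set by legal review, not hard-coded examples like these.
RETENTION = {
    "marketing_contact": timedelta(days=365),
    "support_ticket": timedelta(days=730),
    "sensitive_personal": timedelta(days=90),
}

def sweep(records, now=None):
    """Keep only records still within their retention window; the rest
    are no longer needed for the original purpose and should be deleted."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for record in records:
        limit = RETENTION[record["classification"]]
        if now - record["collected_at"] <= limit:
            kept.append(record)
        # else: securely delete the record here
    return kept

records = [
    {"classification": "sensitive_personal",
     "collected_at": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"classification": "support_ticket",
     "collected_at": datetime.now(timezone.utc)},
]
print(len(sweep(records)))  # 1 -- the stale sensitive record is dropped
```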
JUDY BAI: And the last one is to examine whether the company is using any mobile apps to communicate with people or deliver services in China. The Chinese government has launched several campaigns on mobile apps in the past few years to combat illegal personal information collection and processing, inappropriate access and permission requests, inconvenient account deregistration processes, and non-transparent privacy policy notices.
JUDY BAI: As a result, many apps have been required to make corrections or have been removed from mobile application stores. Therefore, if your company uses mobile apps to communicate with people or deliver services to clients, you should pay attention at the app development stage and make sure the access and permission requests are proper. So, what measures should I take to ensure compliance?
JUDY BAI: I've listed here some common measures that companies could take to protect personal information and meet the compliance requirements imposed by the draft PIPL. These are divided into two categories-- technical measures and organizational measures. In the interest of time today, I won't be going through the list one by one. But it's worth bringing these to the attention of your technical team, and then maybe also your legal colleagues, for assessment and planning.
JUDY BAI: So as the country's first comprehensive law in the area of personal information protection, the PIPL strengthens the protection of personal information while taking into account the complexity of economic and social life. So this draft drew intensive media and public interest from legal professionals, academics, and business representatives.
JUDY BAI: Many studies were conducted to compare the draft law with GDPR and other data laws around the world. The conclusion is that the gap between PRC regulation and the EU's GDPR appears to be closing on personal information. The PRC regulation aims at covering the entire cybersecurity area, while the GDPR appears to still be more comprehensive, especially regarding accountability, the distinction between data controller and data processor, et cetera.
JUDY BAI: So given the potentially wide application of the PIPL and the measures necessary for compliance [AUDIO OUT] Chinese law, companies expected to be doing business in China or to be governed by this law should start to monitor developments and also review policies and practices to prepare for this significant new law.
JUDY BAI: And they should also factor in the relevant costs that might be incurred in ensuring personal information protection when planning their budgets for the near future. Although there's no established schedule yet for passing this law, some estimate that it may be finalized around the middle or end of 2021 at the earliest. So this is all I wanted to share with you today.
JUDY BAI: And I hope you find it useful. Thank you very much.
CHRIS CHAN: Well, thank you, Judy. I thought it was a great note to finish on too-- drawing the parallels between the GDPR and China's proposed regulations. I think it's a good illustration of how privacy really is of global concern. So just to round things off, my thanks again to all of the speakers for their thought-provoking presentations.
CHRIS CHAN: This brings the prerecorded part of the session to a close. And we'll now move to our breakout groups, where the fun will really begin. And as we move into our groups, I'd like to invite you to think about how privacy issues have affected your own organization or your own practice. Have today's speakers sparked some ideas to address some of these? Or, are you dealing with issues and problems that haven't been touched upon yet because we can certainly discuss those as well?
CHRIS CHAN: So I'm looking forward to that. And I'll see you shortly. Thanks again to our speakers. And goodbye for now. [MUSIC PLAYING]