Explara

explara.com

Stage

Seed VC - II | Alive

Total Raised

$490K

Last Raised

$490K | 7 yrs ago

About Explara

Explara, previously known as Ayojak, is a platform that offers a suite of technology solutions and services for online event ticket sales, conference registrations, payment processing, event promotion, and event logistics. With more than 10,000 suppliers on board, Explara has access to inventory for a variety of events and activities across multiple cities.

Headquarters Location

#18, 2nd Floor, 80ft. Road Koramangala 4th Block

Bengaluru, 560 034,

India

+91 969 9611 666


Latest Explara News

#NAMA: How proactive monitoring & removals would impact intermediaries? Platforms, online harms, copyright, classification of intermediaries

Jan 16, 2020

“If proactive monitoring and takedowns is going to be a rule, it’s undoubtedly going to increase costs for us,” said Santosh Panda, CEO of Explara, at MediaNama’s discussion on Intermediary Liability: The Way Forward, held in Bangalore in November 2019. “It will also reduce adoption, since the onboarding process will become complex: we wouldn’t be able to onboard users with just a mobile number, and would have to ask for a PAN or Aadhaar card, etc.,” he said. “At the end of the day, we want to ensure that the user is real, has legal representation, and is eligible to host an event,” he added.

According to Ravi Sethia, co-founder of Udhaar, there should be a blanket rule for everyone. He laid out hypothetical challenges he could face if proactive monitoring and takedowns were implemented: “Being a mobile app for taking loans, one of the things that could happen is someone could have submitted documents with a pornographic image, and suppose we pass it on to the lender. Does that also cause some kind of issue? Are we liable for scrutinising the content we get?” “If someone is recharging their number on our platform or transferring money from one user to another, and there is a message with terrorist content sent along with the money transfer, could that be misused for terrorist activity? Do we become liable?”

Note that quotes are not verbatim and have been edited for clarity.

Is this requirement vulnerable to court challenge?

A substantive new obligation should not be coming into existence under a draft rule, pointed out Vaneesha Jain from Saikrishna Associates, referring to the proactive monitoring and takedowns rule. “No existing intellectual property law requires automated content filters,” she said. But the government can simply pass an ordinance to amend the IT Act if it can’t bring in proactive monitoring and takedowns by amending the rules, said Nikhil Pahwa, founder & editor of MediaNama.

The proactive monitoring and takedown requirement is very vulnerable to a court challenge, which is why it is a very inadvisable way to bring in this obligation, according to Vinay Kesari from Setu. Agreeing with this, Divij Joshi, an independent researcher, said “the rule won’t fly in the courts; it’s arbitrary and unreasonable, and will be struck down. Prior restraint is permissible under the Indian constitution, but not this form of it, for sure.”

Possible solutions for proactive monitoring & removal

On being asked whether the proactive monitoring requirement will impact SaaS platforms, Joshi said the rule could be enforced on all sorts of intermediaries, including SaaS and cloud service providers. There have been cases where solutions for proactive takedowns have been implemented, he said. For instance:

“WhatsApp’s engineers are mooting an in-built solution for content moderation, where they would build an automated system wherein the content doesn’t necessarily have to leave the user’s device, but specific phrases can be flagged within it. These can then be censored, flagged, or sent to some other authority. But largely, Section 3(9) was meant for social media services. And I genuinely hope that it’s not going to be there in the final rules.” — Divij Joshi
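To make the mechanism Joshi describes concrete, here is a minimal, purely hypothetical Python sketch of on-device phrase flagging: a message is checked against a local denylist before it is sent, so the content never leaves the device for moderation. The phrase list, function names, and flagging responses are illustrative assumptions, not WhatsApp’s actual design.

    # Hypothetical sketch of on-device phrase flagging (not WhatsApp's design).
    # The denylist is shipped to the device; the message itself never leaves
    # the device for moderation purposes.
    FLAGGED_PHRASES = {"example banned phrase", "another flagged term"}

    def flag_message(text: str) -> list[str]:
        """Return any flagged phrases found in a message, checked locally."""
        lowered = text.lower()
        return [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]

    def send_message(text: str) -> None:
        hits = flag_message(text)
        if hits:
            # Responses mooted in the discussion: censor the message,
            # flag it, or report it to some other authority.
            print(f"Message flagged locally for: {hits}")
        else:
            print("Message sent; content was never inspected server-side.")

    if __name__ == "__main__":
        send_message("hello world")
        send_message("this contains an example banned phrase")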
How do we think about online harms, such as terrorism, child sexual abuse, and copyright infringements?

There needs to be a system of graded harms, where vulnerable communities and speech that does not require much contextual information come at the top, according to Joshi. “Or when some speech is so harmful that it can be taken down even in the absence of context. Even when it comes to child sexual abuse images, machines cannot process contextual information the way humans can, as became obvious in the Napalm Girl incident,” he recalled.

How proactive takedowns would work (or not work) in copyright cases

“If you look at harms to the most vulnerable communities in that space, I would err on the side of a false negative for something like this,” Joshi said. “But copyright, on the other hand, is a horrible place for the proactive monitoring and takedowns rule to be implemented,” he said, and this is because:

“Firstly, everything on the internet is potentially copyrightable. Copyright is a norm that does not require registration or any active effort. Anything with a modicum of originality and creativity will automatically be copyrighted under Indian law. That means that these potential filters apply to all content all over the internet.”

“Secondly, assessing what is permissible under copyright regulations is a lot more contextual. It’s currently impossible to incorporate fair use and fair dealing standards into automated systems. So even the most sophisticated system, YouTube’s Content ID, which costs upwards of $30 million to implement, has taken down classical music that has been out of copyright for hundreds of years. This was done simply on the claim of Sony BMG that it had published a different recording of it 20 years ago. Thousands of videos get taken down because that’s simply how YouTube’s automated systems work, and there are no procedural safeguards against that.”

“Isn’t copyright infringement online a solved problem to an extent, because the world has come to a kind of consensus already?” asked Kesari. “Major global forces have come to a consensus on how to handle this. In fact, solutions for copyright infringement started being developed before solutions to take down child pornography. YouTube’s Content ID system started being developed in 2007. Microsoft’s PhotoDNA came out in 2009. We might not have the solution, but the problem has been solved to some extent,” he said.

But the question is, where in the intermediary chain do you deploy action against copyright infringements, asked Pahwa. “The Indian music industry wants liability not just for YouTube, but also for ISPs,” he said.

Comparing the proactive takedowns requirement to Article 17 (earlier Article 13) of the EU Copyright Directive, Joshi pointed out that the EU’s directive includes multiple safeguards, including safeguards for what kind of companies are supposed to implement it, and even has post-facto procedural safeguards. “But a system like ours, which broadly says unlawful content must be automatically taken down, is not the same as the Article 17 obligation,” he said.

Considering factors affecting the turnaround timeframe

Torsha Sarkar from the Centre for Internet & Society, who has researched Rule 3(8) of the draft Intermediary Guidelines, said the turnaround time needs to take into account certain nuances, such as the classification of intermediaries by user base, the revenue of companies, the kind of content the intermediary is dealing with, the risks it is taking, etc. Her research also looked into having a monitoring mechanism in the law that can assess the efficacy of any turnaround time the government is pushing for.
“We also argued for notices and appeals, because that provides a check on unnecessary censorship,” she said. “We talked about a gradation of content. The Shreya Singhal judgment said that only reasonable restrictions under Article 19(2) can form part of a legal government order. So we went one step further and classified the content under Article 19(2) into two categories,” she said:

  • One timeframe for critical content, such as sovereignty of the state, security of the state, etc.

  • Another timeframe for subjective speech elements, like defamation, etc.

Do we start differentiating between Platforms and Intermediaries?

The landmark Delhi HC judgment in Darveys v. Christian Louboutin laid out around 25 instances of when an intermediary can have actual knowledge. “In copyright law, the argument made is that YouTube is monetising pirated content through advertising; they know how much money is being made on each type of music, who is running those channels, and who is getting paid for it. The argument is that the amount of knowledge they have then makes them no longer just an intermediary or a mere conduit,” explained Pahwa. Therefore, is “intermediary” effectively too wide a definition, and does that need to be reconsidered in today’s day and age, he asked.

We have to probe the control that these platforms or intermediaries exercise over the actual activity of the platform, said Harshitha Thammaiah from Bytedance. This is also what Justice Pratibha Singh’s judgment in Darveys was getting at, according to her.

“I’m not a big fan of that judgement, but she [Justice Pratibha Singh] has a point. The judgment is saying that if you’re exercising enough control over who’s buying what, via targeted advertising etcetera, then you have knowledge. And because you have this knowledge, the question is should you still take the safe harbour under Section 79.” — Harshitha Thammaiah

The majority of platforms today are absolutely not the same as intermediaries, at least in terms of how intermediaries are defined under various jurisdictions, according to Joshi. “Moderation is at the core of what platforms do. It is the only reason that we use platforms,” he said. “People who were on the recent Mastodon bandwagon might have noticed that the same kind of political process played out on a decentralized network. Mastodon implemented its own content moderation policy and threw out the Assam police, which I don’t think is justifiable. And this is not about it happening on one Mastodon instance; the point is that content moderation is inevitable on any platform,” he said. “There needs to be a decoupling, where we look at regulation and liability issues separately,” he concluded.

Defending platforms, Sachin Dhawan from Facebook said that “even mainstream media platforms are moderating: they’re gate-keeping what gets said, they’re determining what we can see and have access to, and how headlines are framed.” According to him:

“Whether we think about anonymity and journalists, whistle-blowers, or vulnerable groups like the gay community, I’d request people to think where the vulnerable, the minorities, the voiceless would go if they’re not going to be represented accurately in mainstream media platforms. Do we not then need safe harbour for intermediaries, to promote their content?” — Sachin Dhawan

But platforms carry out similar activities, exercising control through algorithms, filter bubbles, and prioritising content for users, pointed out Pahwa. “Shouldn’t there be greater responsibility and accountability?
Because the problem with the platform ecosystem is that there is a gap between the responsibility and the accountability of platforms,” he asked.

Should we be classifying intermediaries as significant or otherwise? How?

There are clear trends of regulation moving from horizontal to vertical, wherein responsibilities have been shifting from MEITY to sectoral regulators, such as the RBI for payment gateways, the I&B ministry for online content regulation, the Health ministry for e-pharmacies, and so on, said Pahwa. Does this create ground for differential regulation of different intermediaries? Should this be done by type, or by size, he asked.

There should be a blanket rule for everyone, said Sethia of Udhaar, though setting a scale threshold for start-ups will obviously be very difficult. “Anything above 1 million [users] should be considered, otherwise it will again stifle innovation. Because if I have to start up a content business, or something which has user-generated content, I’ll have to put in all those efforts of moderation, and before that I can’t even go live,” he said.

When you register a company, you also have to say what kind of company it is, and then you are allowed to do certain things and not allowed to do other things, explained Santosh Panda from Explara. “If you go industry-wise, it’s going to stop people from being creative. People will start going to Singapore, just as they have for cryptocurrencies,” he added.

There are two ways I see this complaint of [regulation harming] innovation coming in, said Siddharth Manohar from Aapti Institute. “One being that if you have a threshold of a user base or revenue size and that’s too low, then you will stifle innovation, because start-ups need to get off the ground,” he explained. “I think they should be harms-focused right now,” he said. “I would advocate for sectoral regulators taking a focus on regulating against harms, and also a more centralised approach to fill in the gaps where there aren’t sector-specific regulations.”

“In that context, an innovator’s job is still to innovate, is still to provide services, but they know the pitfalls, these other harms that you have to steer clear of to avoid penalisation. We shouldn’t shy away from regulation on the excuse that it’s discouraging innovation. It’s not; that’s the point of innovation, right? You overcome.” — Siddharth Manohar

The size criterion may also be relevant, in so far as it affects the ability of intermediaries to push back and take a stand in cases, said Manish Sabharwal from the Centre for Policy Research. A case in point is the Subodh Gupta case in the Delhi HC, wherein the court had asked Instagram to provide details of an anonymous account, Sabharwal said. “Facebook took a stand in court saying that this would violate the privacy of users, and so on. But it’s doubtful that an intermediary not of Facebook’s scale would be able to respond in the same manner. Should differing standards therefore be applied to intermediaries based on the size of the platform? Because in a lot of these circumstances, it is effectively the intermediary that channelizes users’ interests,” he concluded.

Applicability of global takedowns: Baba Ramdev in Delhi HC

“It’s actually easier for a platform to do global takedowns rather than localised takedowns,” said Joshi. “The Google Right to be Forgotten judgment gave leeway for countries to mandate global takedowns if they so require.
The Delhi HC case is much less nuanced and basically orders a global takedown of defamatory content about Baba Ramdev. It’s becoming a race to the bottom: it started with a trademark case in Canada involving Google, then the CJEU judgment, and now it’s happening in India,” he said. “The issue with the Baba Ramdev case is that Indian courts have till now assumed universal jurisdiction over the internet,” he explained.

“The standard that the Delhi HC applied is that if something is accessible in India, Indian courts have jurisdiction over it, which is an incredibly dangerous standard for what we call international judicial comity, and a bad idea for the politics of the internet.” — Divij Joshi

In addition to the content being accessible in India, the second standard applied was that the content was also uploaded in India, explained Shodhan Babu. “The Delhi HC also relied on the Austrian defamation case: the court will try to restore the status quo ante, as in, what was the position before this was uploaded in India. So that seems to be the basis for ordering a global takedown,” he explained. “The court has not gone into whether something is illegal merely because it is accessible in India. Facebook and all the other entities admitted that this was uploaded in India, and also conceded to blocking it in India. The only question argued was whether the takedown should be global or not. On a legal analysis, I don’t see any fault with the reasoning. You may disagree with the result that ultimately flows from it from a public policy perspective, but purely on the analysis of the act, I don’t think the reasoning is faulty,” he said.

* Read our coverage of our discussion on Intermediary Liability: The Way Forward, held in Bangalore on November 22, 2019. This discussion was held with support from Facebook.

Explara Frequently Asked Questions (FAQ)

  • Where is Explara's headquarters?

    Explara's headquarters is located at #18, 2nd Floor, 80ft. Road, Bengaluru.

  • What is Explara's latest funding round?

    Explara's latest funding round is Seed VC - II.

  • How much did Explara raise?

    Explara raised a total of $490K.

  • Who are the investors of Explara?

    Investors of Explara include Rajan Anandan, Blume Ventures, Hyderabad Angels, Ness Wadia, Krishna Lakamsani and 7 more.
