All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Here is a basic tstats search I use to check network traffic.

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic where All_Traffic.src IN ("11.2.2.1","11.2.2.2","11.2.2.3") by All_Traffic.src, All_Traffic.dest, All_Traffic.action, All_Traffic.dest_port, All_Traffic.bytes, sourcetype
| sort -count

I have a lookup file called "ip_ioc.csv" containing a single column of IPv4 addresses which are potential bad actors. Instead of searching through a hard-coded list of IP addresses as above, I want the tstats search to check the lookup file. How can I modify the above search? Here is a terrible and incorrect attempt at what I am trying to perform:

| tstats count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.action, All_Traffic.dest_port, All_Traffic.bytes, sourcetype
| lookup ip_ioc.csv ip_ioc
| where ip_ioc == All_Traffic.src OR ip_ioc == All_Traffic.dest
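A minimal sketch of one possible approach (untested, and assuming the lookup column is named ip_ioc): feed the lookup into the tstats where clause via a subsearch, renaming the column to the data-model field name so the subsearch expands into an implicit OR of src values:

```spl
| tstats summariesonly=t fillnull_value="MISSING" count
    from datamodel=Network_Traffic.All_Traffic
    where [| inputlookup ip_ioc.csv | rename ip_ioc AS "All_Traffic.src" | fields "All_Traffic.src"]
    by All_Traffic.src, All_Traffic.dest, All_Traffic.action, All_Traffic.dest_port, All_Traffic.bytes, sourcetype
| sort -count
```

To match either src or dest, the subsearch would need to emit both fields (e.g. two renamed copies combined with OR via the format command); the single-field form above is the simplest starting point.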
Hello, I need some quick help. I am working on locking down one of my Splunk applications. I don't want any user/power-user roles to be able to make changes in the UI to any knowledge objects; I would rather have the code checked into SVN (Subversion) and not edited directly in the UI. Any suggestions would be great! Thank you.
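One common approach (a sketch, assuming the app is deployed from version control) is to strip UI write access in the app's metadata, so knowledge objects can only change via files checked into SVN and pushed to the server:

```conf
# metadata/local.meta inside the app -- removes UI write access
# for everyone except admin; users can still read and use the objects
[]
access = read : [ * ], write : [ admin ]
```

The empty stanza applies to every object type in the app; more granular stanzas (e.g. [savedsearches]) can be used instead if only some knowledge-object types should be locked.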
Hello, I have an app that is ready to be uploaded to SplunkBase. The app has passed AppInspect and is ready to go. When I tried to upload it to SplunkBase I received this error: "Unable to Extract Package". I packaged the app using the Splunk Packaging Toolkit. Can someone tell me what is going on? Thank you, Mark
I'm trying to check the value of a token: if it is equal to "X", change it to an *, but if it is equal to anything else, leave the token alone. I'm trying something like this but am not sure it is possible:

<drilldown>
  <eval token='my_token'>if("X", "*", $my_token$)</eval>
  <link target="_blank">search?q= my search...my_field=$my_token$.....blah blah blah...</link>
</drilldown>

Thanks.
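A sketch of what the <eval> might look like: if() needs an actual comparison as its first argument, and the token usually has to be quoted so the substituted value is treated as a string:

```xml
<drilldown>
  <eval token="my_token">if("$my_token$" == "X", "*", "$my_token$")</eval>
  <link target="_blank">search?q= my search...my_field=$my_token$.....blah blah blah...</link>
</drilldown>
```

This is untested against your dashboard; the key change is comparing "$my_token$" == "X" rather than passing a bare "X" to if().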
Hi everyone, I have one requirement. I have a date drop-down like this:

<input type="time" token="field1">
  <label>Date/Time</label>
  <default>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
  </default>
</input>

And one panel query as below. I need to know how I can pass $field1.earliest$ and $field1.latest$ into nowdate (highlighted in bold):

<chart>
  <search>
    <query>| inputlookup JOB_MDJX_CS_EXTR_STATS_2.csv
| append [ inputlookup JOB_MDJX_CS_EXTR_STATS_2_E2.csv]
| append [ inputlookup JOB_MDJX_CS_EXTR_STATS_2_E3.csv]
| where Environment="$Env$"
| where JOBFLOW_ID LIKE "%$Src_Jb_Id$%"
| eval Run_date1="20".RUNDATE2
| eval Run_Date=strptime(Run_date1,"%Y%m%d")
| eval nowdate=relative_time(now(), "-7d@d")
| fieldformat nowdate=strftime(nowdate,"%d/%b/%Y")
| fieldformat Run_Date=strftime(Run_Date,"%d/%b/%Y")
| where Run_Date&gt;=nowdate
| stats sum(REC_COUNT) by Run_Date</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
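Since dashboard tokens substitute as literal text into the query, one sketch (assuming the picker returns a relative time string such as -7d@d) is simply:

```spl
| eval nowdate=relative_time(now(), "$field1.earliest$")
```

Caveat: if the user picks a fixed date range, $field1.earliest$ arrives as an epoch number instead of a relative-time string, in which case a more defensive variant like | eval nowdate=coalesce(relative_time(now(), "$field1.earliest$"), tonumber("$field1.earliest$")) may be needed. This is a sketch, not verified against your dashboard.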
I am looking for an eval search that looks for a field named "Name" and keeps its value. If the field doesn't exist, I want to add a "Name" field with "N/A" as the data.

| eval Name = if((like(Name,"*"))),"&Name&","N/A")

This might be the wrong way of doing it.

Event example #1:
Hostname         Time     Name    Action
Server02         11:22am  jdoe    logon
Server20         1:30pm   jsmith  logon

Event example #2:
Hostname         Time     Action
Workstation      10:45am  Saved
Workstation 100  12:30pm  Saved

After the search is run I want the data to look like this:

Hostname         Time     Name    Action
Server02         11:22am  jdoe    logon
Server20         1:30pm   jsmith  logon
Workstation      10:45am  N/A     Saved
Workstation 100  12:30pm  N/A     Saved
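A simpler construct that should do this (sketch): coalesce() returns its first non-null argument, so a missing Name becomes "N/A" while existing values pass through unchanged:

```spl
| eval Name = coalesce(Name, "N/A")
```

An equivalent alternative is | fillnull value="N/A" Name.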
Hi guys, I'm trying to create a search that triggers an alert every time a user has been signed out of their O365 session; however, I am unable to identify the correct workload. I'd like to clarify that I currently do not have access to the O365 Splunk add-on (and it probably won't be installed anytime soon). Which workload do I need to use to identify activity performed in the O365 admin portal? I initially thought it would be index=o365 sourcetype=o365:management:activity Workload=SecurityComplianceCenter, but it doesn't seem to show me anything related to sessions that have been signed out. Any useful feedback would be much appreciated. Thanks!
I am working with two different data types; some events have a CVE field and others don't. I have a text input on a dashboard that works for all events that have a CVE field, but I also want all of the other events that don't have a field named CVE. Basically, I want to search on the "CVE" field if it exists and, if it doesn't, have the filter be overlooked. Thank you ahead of time!
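One way to sketch this in the base search (tok_cve is a hypothetical name for your text-input token): match the token where the CVE field exists, and let events without a CVE field through untouched:

```spl
... your base search ... (CVE="$tok_cve$" OR NOT CVE=*)
```

NOT CVE=* is true only for events where the CVE field is absent, so the filter effectively applies just to events that carry a CVE.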
Hi everyone, I have a requirement where I need to combine 3 CSVs with 3 master CSVs for Environments E1, E2 and E3. I have made the query like this:

| inputlookup JOB_MDJX_CS_STATS_2.csv
| append [ inputlookup JOB_MDJX_CS_STATS_2_E2.csv]
| append [ inputlookup JOB_MDJX_CS_STATS_2_E3.csv]
| join type=outer JOBFLOW_ID [ inputlookup JOB_MDJX_CS_MASTER.CSV ]
| join type=outer JOBFLOW_ID [ inputlookup JOB_MDJX_CS_MASTER_E2.csv ]
| join type=outer JOBFLOW_ID [ inputlookup JOB_MDJX_CS_MASTER_E3.csv]
| where Environment="E3"
| eval y=20
| eval Run_date1= y."".RUNDATE2
| eval Run_Date=strftime(strptime(Run_date1,"%Y%m%d"),"%d/%m/%Y")
| eval nowdate=strftime(relative_time(now(), "-7d@d" ), "%d/%m/%Y")
| stats sum(JOB_EXEC_TIME) as TotalExecTime by JOBFLOW_ID
| eval TotalExecTime=round(TotalExecTime,2)
| sort -TotalExecTime limit=10

The issue I am facing is that it shows data only for E3, even when I set the environment to E1 or E2, despite there being data in E1 and E2. Can someone tell me what's wrong with my query? Thanks in advance.
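One thing worth ruling out (a sketch, not verified against your data): each successive join type=outer on JOBFLOW_ID can overwrite shared fields, including Environment, with values from the later master files, which would make every row look like E3 by the time the where clause runs. Filtering before the joins sidesteps that:

```spl
| inputlookup JOB_MDJX_CS_STATS_2.csv
| append [| inputlookup JOB_MDJX_CS_STATS_2_E2.csv]
| append [| inputlookup JOB_MDJX_CS_STATS_2_E3.csv]
| where Environment="E1"
| join type=outer JOBFLOW_ID [| inputlookup JOB_MDJX_CS_MASTER.CSV]
```

If that fixes it, joining only the master file matching the chosen environment (rather than all three) would be the cleaner long-term shape.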
Hi all, this app was previously working but now shows a constant 'Loading...' screen. Nothing loads other than the app's menu strip. This is the same in any area which isn't 'Documentation' or 'Troubleshooting' (which is really just the search). When clicking into these non-loading areas of the app, the whole Splunk interface seems to change slightly, almost like Splunk has been downgraded by several versions. The 'Microsoft Azure Add-on for Splunk' is on version 3.0.1. Has anyone seen this kind of thing before? [Screenshots: Search & Reporting menu; broken Azure app menu]
I have built a query that exports data by a date range and based on a scan or source. Currently I'm grouping them into a multivalued field so that I can have 1 row per source and can easily see all the impacted data by date range. The table has columns Source, Within 30 Days, 30-60 Days, 60-90 Days and 90+ Days; for scan1.csv the findings currently come out as (severity | plugin name | IP) triples spread across the age columns:

Low | SSL/TLS Diffie-Hellman Modulus <= 1024 Bits (Logjam) | 192.168.0.1
Medium | SSL Certificate Cannot Be Trusted | 192.168.0.1
Medium | SMB Signing not required | 192.168.0.15
Low | SSL/TLS Diffie-Hellman Modulus <= 1024 Bits (Logjam) | 192.168.0.15
Medium | SSL Self-Signed Certificate | 192.168.0.1
Medium | SMB Signing not required | 192.168.0.1
Medium | SSL Certificate Cannot Be Trusted | 192.168.0.15

As you can see above with the Logjam finding, I'm repeating the 'Severity' and 'Plugin Name' with each unique IP address. I'd like, however, to only see the new IP address in this instance (see below) but have been unable to find a way to group by 'severity' and 'plugin_name' within the multivalued group.
[Screenshot: desired output]

Using the test data below, you can use this search string to reproduce the 1st table above:

| inputlookup test.csv
| eval epochTime=now()-firstSeen
| eval dateRange=case(epochTime <= (86400*30), "Within 30 Days", epochTime >= (86400*31) AND epochTime <= (86400*60), "30-60 Days", epochTime >= (86400*61) AND epochTime <= (86400*90), "60-90 Days", epochTime > (86400*91), "90+ Days")
| dedup severity plugin_name ipv4
| sort severity plugin_name ipv4
| eval findings30days=case(dateRange=="Within 30 Days", mvappend(severity, plugin_name, ipv4))
| eval findings3060days=case(dateRange=="30-60 Days", mvappend(severity, plugin_name, ipv4))
| eval findings6090days=case(dateRange=="60-90 Days", mvappend(severity, plugin_name, ipv4))
| eval findingsOver90days=case(dateRange=="90+ Days", mvappend(severity, plugin_name, ipv4))
| rename source AS Scan, severity AS Severity, plugin_name AS "Plugin Name", dateRange AS Age, ipv4 AS IP
| stats list(findings30days) AS "Within 30 Days" list(findings3060days) AS "30-60 Days" list(findings6090days) AS "60-90 Days" list(findingsOver90days) AS "90+ Days" by Scan

Below is some sample data that may be useful:

source     severity  ipv4          plugin_name                                               firstSeen
scan1.csv  Medium    192.168.0.1   SSL Certificate Cannot Be Trusted                         1617212174
scan1.csv  Medium    192.168.0.15  SSL Certificate Cannot Be Trusted                         1617212174
scan1.csv  Medium    192.168.0.1   SSL Self-Signed Certificate                               1619459159
scan1.csv  Medium    192.168.0.1   SSL Certificate with Wrong Hostname                       1619459159
scan1.csv  Medium    192.168.0.15  SSL Self-Signed Certificate                               1617212174
scan1.csv  Medium    192.168.0.15  SSL Certificate with Wrong Hostname                       1617212174
scan1.csv  Medium    192.168.0.1   TLS Version 1.0 Protocol Detection                        1619459159
scan1.csv  Medium    192.168.0.15  TLS Version 1.0 Protocol Detection                        1619459159
scan2.csv  Medium    192.168.0.2   TLS Version 1.0 Protocol Detection                        1619459159
scan2.csv  Medium    192.168.0.2   SSL Certificate with Wrong Hostname                       1619459159
scan2.csv  Medium    192.168.0.2   SSL Certificate Cannot Be Trusted                         1619459159
scan2.csv  Medium    192.168.0.3   TLS Version 1.0 Protocol Detection                        1619459159
scan2.csv  Medium    192.168.0.3   SSL Certificate with Wrong Hostname                       1620053764
scan2.csv  Medium    192.168.0.3   SSL Certificate Cannot Be Trusted                         1620053764
scan1.csv  Medium    192.168.0.1   SSL RC4 Cipher Suites Supported (Bar Mitzvah)             1620053764
scan1.csv  Medium    192.168.0.1   SSL Medium Strength Cipher Suites Supported (SWEET32)     1620053764
scan1.csv  Low       192.168.0.15  SSL/TLS Diffie-Hellman Modulus <= 1024 Bits (Logjam)      1620053764
scan1.csv  Low       192.168.0.1   SSL/TLS Diffie-Hellman Modulus <= 1024 Bits (Logjam)      1620053764

[Screenshot: current output]
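A sketch of one way to avoid repeating severity/plugin_name per IP: collapse the IPs per (source, severity, plugin_name, age bucket) with stats values() before building the multivalue cells, so each finding carries its severity and plugin name once, followed by all of its IPs:

```spl
| inputlookup test.csv
| eval age=now()-firstSeen
| eval dateRange=case(age<=86400*30, "Within 30 Days",
                      age<=86400*60, "30-60 Days",
                      age<=86400*90, "60-90 Days",
                      true(),        "90+ Days")
| stats values(ipv4) AS ips by source severity plugin_name dateRange
| eval finding=mvappend(severity, plugin_name, ips)
```

The existing findings30days/findings3060days/... evals and the final stats by Scan can then operate on finding instead of rebuilding severity and plugin_name for every IP. Untested against your lookup, but the shape should carry over.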
Hi, I am trying to set up the email settings in Splunk with Mimecast but it is not working. eu-smtp-outbound-1.mimecast.com:587 — where can I find the logs for this issue? Regards, Johana
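Email-sending errors usually land in Splunk's own _internal index; a starting-point search (a sketch, not a definitive source filter) would be:

```spl
index=_internal (sourcetype=splunkd OR source=*python.log*) sendemail
```

SMTP connection and authentication failures from the sendemail command typically show up there with the remote host and error text.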
Hi there, I've expanded the data age vs frozen age dashboard into search and want to get an overview of the whole platform, listing each index with its oldest data etc.

| rest splunk_server_group=dmc_group_indexer splunk_server_group="*" /services/data/indexes/
| join title splunk_server index type=outer [| rest splunk_server_group=dmc_group_indexer splunk_server_group="*" /services/data/indexes-extended/]
| eval bucketCount = coalesce(total_bucket_count, 0)
| eval eventCount = coalesce(totalEventCount, 0)
| eval coldBucketSize = coalesce('bucket_dirs.cold.bucket_size', 'bucket_dirs.cold.size', 0)
| eval coldBucketSizeGB = round(coldBucketSize / 1024, 2)
| eval coldBucketMaxSizeGB = if(isnull('coldPath.maxDataSizeMB') OR 'coldPath.maxDataSizeMB' = 0, "unlimited", round('coldPath.maxDataSizeMB' / 1024, 2))
| eval coldBucketUsageGB = coldBucketSizeGB." / ".coldBucketMaxSizeGB
| eval homeBucketSizeGB = coalesce(round((total_size - coldBucketSize) / 1024, 2), 0.00)
| eval homeBucketMaxSizeGB = round('homePath.maxDataSizeMB' / 1024, 2)
| eval homeBucketMaxSizeGB = if(homeBucketMaxSizeGB > 0, homeBucketMaxSizeGB, "unlimited")
| eval homeBucketUsageGB = homeBucketSizeGB." / ".homeBucketMaxSizeGB
| eval dataAgeDays = coalesce(round((now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")) / 86400, 0), 0)
| eval frozenTimePeriodDays = round(frozenTimePeriodInSecs / 86400, 0)
| eval frozenTimePeriodDays = if(frozenTimePeriodDays > 0, frozenTimePeriodDays, "unlimited")
| eval freezeRatioDays = dataAgeDays." / ".frozenTimePeriodDays
| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >= 1, round(currentDBSizeMB/1024, 2), 0.00)
| eval maxTotalDataSizeGB = round(maxTotalDataSizeMB / 1024, 2)
| eval indexMaxSizeGB = if(maxTotalDataSizeGB > 0, maxTotalDataSizeGB, "unlimited")
| eval indexSizeUsageGB = indexSizeGB." / ".indexMaxSizeGB
| eval indexSizeUsagePerc = if(isnum(indexMaxSizeGB) AND (indexMaxSizeGB > 0), round(indexSizeGB / indexMaxSizeGB * 100, 2)."%", "N/A")
| eval total_raw_size = coalesce(total_raw_size, 0)
| eval avgBucketSize = round(indexSizeGB / bucketCount, 2)
| fields title, splunk_server, freezeRatioDays, indexSizeUsageGB, homeBucketUsageGB, coldBucketUsageGB, eventCount, bucketCount, avgBucketSize
| rename splunk_server as "Indexer" freezeRatioDays as "Data Age vs Frozen Age (days)" indexSizeUsageGB as "Index Usage (GB)" homeBucketUsageGB as "Home Path Usage (GB)" coldBucketUsageGB as "Cold Path Usage (GB)" eventCount as "Total Event Count" bucketCount as "Total Bucket Count" avgBucketSize as "Average Bucket Size (GB)"

I have the index name etc. listed, but how do I group all the indexers so that the indexes are not split by indexer, but rather shown in one "overall" view by index? Thanks!
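One sketch: aggregate the raw numeric fields by title (the index name) before building the " x / y " display strings, so each index appears once regardless of how many indexers report it. Inserted just before the freezeRatioDays/usage-string evals, something like:

```spl
| stats sum(eventCount)  AS eventCount
        sum(bucketCount) AS bucketCount
        sum(indexSizeGB) AS indexSizeGB
        max(dataAgeDays) AS dataAgeDays
        by title
```

The usage strings and renames would then be computed after this stats. Whether the per-index max sizes should be summed or taken once per index depends on how the limits are configured, so treat the aggregation functions above as placeholders to adjust.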
I have Splunk in the below design: one HF to two separate indexers that are not clustered. I have a UF installed on my workstation and the UF is sending logs to the HF. The HF has data inputs set for all types of Windows logs into a "windows" index, and this data is going to indexer A but not to indexer B. My outputs.conf on the UF is:

[tcpout]
useACK = true
maxQueueSize = auto
readTimeout = 300

[tcpout:abchf]
server = 10.20.30.40:9997
compressed = TRUE
sslRootCAPath = C:/Program Files/SplunkUniversalForwarder/etc/apps/test
sslCertPath = C:/Program Files/SplunkUniversalForwarder/etc/apps/test
sslPassword = Password
sslVerifyServerCert = true
sslCommonNameToCheck = google.com

On indexer A I can see the data is ingested, but on indexer B I cannot see the data. As I mentioned earlier, indexer A and indexer B are not clustered.
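If the goal is to get every event onto both indexers, the instance that actually sends to the indexers (the HF in this design) needs both of them as separate tcpout groups listed in defaultGroup, because data is cloned across groups but load-balanced within a group. A sketch, with placeholder addresses for the two indexers:

```conf
# outputs.conf on the heavy forwarder (addresses are placeholders)
[tcpout]
defaultGroup = indexerA, indexerB   # cloned to EACH group listed here

[tcpout:indexerA]
server = 10.20.30.41:9997

[tcpout:indexerB]
server = 10.20.30.42:9997
```

Listing both servers inside a single group would split the data between the indexers instead of duplicating it, so keep them in separate stanzas if both need a full copy.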
Is there any way I can use a dropdown to specify days but have it be the same time period every day? For example, it would appear in the dropdown like:

Mon - 9am to 5pm
Tue - 9am to 5pm
Wed - 9am to 5pm
Thurs - 9am to 5pm
Fri - 9am to 5pm

and have it reference those days and times?
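A sketch of one approach: a static dropdown that sets a weekday token, combined with the date_wday and date_hour fields in the panel search (note these fields come from the event's raw timestamp, so they can be misleading if the data's time zone differs from yours):

```xml
<input type="dropdown" token="day">
  <label>Business day</label>
  <choice value="monday">Mon - 9am to 5pm</choice>
  <choice value="tuesday">Tue - 9am to 5pm</choice>
  <choice value="wednesday">Wed - 9am to 5pm</choice>
  <choice value="thursday">Thurs - 9am to 5pm</choice>
  <choice value="friday">Fri - 9am to 5pm</choice>
</input>
```

and in the panel search: ... date_wday=$day$ date_hour>=9 date_hour<17. The token name day is illustrative.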
Greetings, I have a running Splunk instance inside a Docker container. I am able to log in through the Splunk UI with admin:<password> and can see Splunk. But when I run the command node sdkdo tests-browser, it opens up a page, and even if I enter the same credentials as before, sometimes I get a login error and sometimes I get "splunkjs not found" in tests.browser.html. Thanks for any kind of help.
Hi there, the scenario is that we have a central syslog server which receives syslog messages from different servers in the organization. When the syslog data is received on the syslog server, a universal forwarder agent on the syslog server forwards it to Splunk. The issue is that some servers use the UTC time zone, and therefore the syslog data from those servers contains UTC timestamps. Is there a way to change how the forwarder interprets the syslog data files received from those servers? I've tried setting TZ = UTC and TZ_ALIAS = UTC in props.conf, with a source stanza specifying the path for those specific log files that have UTC timestamps in their events. However, in Splunk I still see those events with the UTC timestamp. This is an issue because of which we can't search properly. Any advice would be much appreciated. Thanks.
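Worth noting: TZ only takes effect on the Splunk instance that parses the events. With a universal forwarder sending unparsed data, that is the indexer (or an intermediate heavy forwarder), not the UF itself, so the stanza needs to live on the parsing tier. A sketch (the path is illustrative, to be replaced with your actual syslog file location):

```conf
# props.conf on the indexer / heavy forwarder, NOT on the UF
[source::/var/log/remote/utc-hosts/*.log]
TZ = UTC
```

After restarting the parsing tier, only newly indexed events pick up the corrected time-zone interpretation; events already indexed keep their original _time.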
WARN FilesystemChangeWatcher - error getting attributes of path "C:\pagefile.sys": The process cannot access the file because it is being used by another process.
Hi, I am trying to create a new MySQL connection in DB Connect.

Driver installed: 5.1 (the application uses JDK 7)
Java MySQL connector version: 5.1.18
The user used for creating this connection is read-only.

When I add all the required fields and click Save, I get a "there was an error processing your request" message. splunk_app_db_connect_server.log shows the below error:

2021-05-17 11:06:28.239 +0000 [dw-2337144 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: f78a9008d0c2e4bb
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at sun.reflect.GeneratedMethodAccessor266.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)

Can you please suggest how this can be resolved?

Edit 1: In some other answers I read this could be caused by invalid DB user credentials or an access-rights issue, but I have tried logging in with the same credentials through MySQL Workbench and it works fine. Thank you.
I was able to change it through the Developer Tools feature in Chrome, which is only temporary. I wish to change the message "No Results found" to "Not Applicable" in all the panels (same panel in the screenshot below). I achieved it through Developer Tools as shown, but how do I make it permanent? Could someone please help with what lines of code I should add to my source code so "Not Applicable" is displayed? Thanks in advance!
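An SPL-only workaround (a sketch) that avoids editing Splunk's internals: append a placeholder row when the panel's search returns nothing, so the table shows a row reading "Not Applicable" instead of the built-in "No results found" message:

```spl
... existing panel search ...
| appendpipe [ stats count | where count=0 | eval Status="Not Applicable" | fields Status ]
```

The subsearch produces a single count=0 row only when the main search has no results, so normal output is unaffected. The Status field name is illustrative; for a multi-column table you would eval whichever column should carry the text.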