All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I found that the count shown with the 'Other' slice in Splunk's pie chart visualization is not actually a count of events. From the documentation, it appears to be the count of low-contribution slices that were collapsed into 'Other'. Is there any way to display the count of events in the 'Other' slice instead? I checked the documentation and forums but couldn't find anything.
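A common workaround is to collapse the small slices yourself before charting, so the value rendered for "Other" is a summed event count rather than the number of collapsed slices. A hedged sketch; the field name `category` and the threshold of 100 are placeholder assumptions:

```
... base search ...
| stats count by category
| eval category=if(count < 100, "OTHER", category)
| stats sum(count) as count by category
```

Because the collapsing happens in SPL, the pie chart then simply renders the precomputed counts.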
I'm running Splunk 8.2.2 in a Docker container. I'm using a separate app with a scripted input to get data into Splunk via a bash script. That script works perfectly, except when the source API screws up, or when I delete the index and need to backfill all of the previous data. The scripted input is set up in inputs.conf as:

[script://$SPLUNK_HOME/etc/apps/app/bin/app.sh]
interval = */5 * * * *

Is there a way to manually run the script one time and have Splunk consume the output? I'd really like to avoid setting up a regular file monitor just for a backfill operation, and I'd also like to avoid modifying the working scripted input. Thank you for any suggestions you can provide.
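For a one-off backfill, the `splunk add oneshot` CLI command can index a file exactly once, without a monitor stanza or any change to the scripted input. A sketch, where the output path, index, and sourcetype names are assumptions:

```
# Run the script once, capture its output, then index that file one time
$SPLUNK_HOME/etc/apps/app/bin/app.sh > /tmp/backfill.out
$SPLUNK_HOME/bin/splunk add oneshot /tmp/backfill.out -index main -sourcetype app_backfill
```

Setting the sourcetype to match the scripted input keeps the backfilled events consistent with the regularly collected ones.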
I want to use the F5 Network analytics app to monitor virtual servers and pools. I installed the app in Splunk, but no traffic is displayed. Please guide me on the configuration needed to see traffic.
I have a Timeline visualization where date-wise, engine-wise status is displayed for an analytic, i.e. whether each execution succeeded or failed. Currently the status is represented as bubbles, but I need to represent it as vertical lines. If there is a way, could anyone help me with this?
Hi, I have a Timeline visualization where date-wise, engine-wise status is displayed for an analytic, i.e. whether each execution succeeded or failed. Now I need to display a table when an event in the visualization is clicked, passing the analytic name, execution date/time, and engine number. Could you please help me create a drilldown table for this? Thanks.
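As a hedged Simple XML sketch: a drilldown on the visualization can set tokens from the clicked event, and a table panel can consume them. The exact click tokens exposed depend on the Timeline visualization, and the field names (analytic, engine, execution_time) and token names below are assumptions:

```
<viz type="timeline_app.timeline">
  <search>...</search>
  <drilldown>
    <set token="tok_analytic">$row.analytic$</set>
    <set token="tok_engine">$row.engine$</set>
    <set token="tok_time">$row._time$</set>
  </drilldown>
</viz>
<table>
  <search>
    <query>index=... analytic="$tok_analytic$" engine="$tok_engine$" execution_time="$tok_time$"</query>
  </search>
</table>
```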
I need help with a regex for parsing a URL. The URL can be a/b/c, a/b/c/d, or a/b/c/d/e. In each case I need the resource of the URL: c, d, and e respectively for the URLs above. Can you help with a regex that captures the value after the last /?
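A regex that anchors at the end of the string and captures everything after the last slash does this. In SPL's rex, assuming the URL sits in a field named url:

```
... | rex field=url "/(?<resource>[^/]+)$"
```

For a/b/c this captures c; for a/b/c/d/e it captures e.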
I need help creating a list of all UFs and HFs and their versions, please. We have them on Windows and RHEL boxes. Thank you.
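One common approach, sketched here, is to read the forwarder connection metrics that indexers write to _internal; these events typically carry the forwarder's hostname, version, OS, and type. Field names can vary slightly by Splunk version, so treat this as a starting point:

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as version latest(os) as os latest(fwdType) as fwdType by hostname
| sort hostname
```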
Hi folks, I'm trying to append multiple field values to a CSV as the result of a search. The CSV file contains a list of seen hashes. I have the following query:

index=AV NOT([ | inputlookup Hashes.csv | stats values(hashes) AS search | format ])

So the question is: how can I add the resulting hash values from this search to the CSV? I already tried the following query, with no results:

| foreach * [| append [makeresults | eval hashes=file_hash] | fields hashes | outputlookup Hashes.csv]
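Since `outputlookup` supports `append=true`, one hedged sketch is to reshape the new hashes into the lookup's column name and append them; it assumes the event field is file_hash and the lookup column is hashes:

```
index=AV NOT([ | inputlookup Hashes.csv | stats values(hashes) AS search | format ])
| stats values(file_hash) as hashes
| mvexpand hashes
| outputlookup append=true Hashes.csv
```

With append=true the existing rows in Hashes.csv are kept and only the new results are added.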
How can I create a scheduled report that runs every hour and makes GET requests to fetch data from an open source? Basically, it would query the same page and keep the latest available data up to date on premises.
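SPL itself has no built-in command for arbitrary HTTP GETs, so the usual pattern is a scripted input that fetches the page on an interval and writes the response to stdout for Splunk to index. A hedged inputs.conf sketch; the script path, index, and sourcetype are placeholders:

```
# inputs.conf -- run a fetch script every hour
[script://$SPLUNK_HOME/etc/apps/myapp/bin/fetch_data.sh]
interval = 3600
index = main
sourcetype = api_data
disabled = 0
```

The script itself would simply curl the URL and print the body; a scheduled report can then query the indexed data.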
I am looking for a way to limit user searches to only the most recent 30 days, specifically for SmartStore purposes. For example, with SmartStore you may want to restrict who can search old data; otherwise, a few runaway users could cause your indexers to evict recent data from the cache.

Configuring Roles / Resources / "Role search time window limit" does not do this, because it designates a window size rather than the most recent time. A user could still search -30d to -60d, then -60d to -90d, and so on, which still evicts your recent data. This method also fails quietly: if a user searches 31 days, they have to click "Jobs" to find out their search was limited, which isn't good.

Configuring Roles / Restrictions and adding "earliest = -30d" does not do this either, because it overwrites the UI time range completely (i.e. users can no longer search just the last 60 minutes).

The closest I have come is adding "_index_earliest = -30d" under Roles / Restrictions, but users can still trigger SmartStore downloads. I am asking this question believing there isn't currently an answer, but I'd like to at least raise awareness.
Hello! I am troubleshooting a report, and I've cut it down to the very basics with the following two snippets. Basically, 'join' with a CSV is not returning the expected results. This dataset between Sept 1 and Sept 2 has about 75,000 unique entries (but the base search with value=374667 has only about 30!).

index="xxx" sourcetype="xxx" value="374667" timeformat="%Y-%m-%d" earliest="2021-09-01" latest="2021-09-02"
| join value [inputlookup lookup.csv]
| dedup value
| chart count

The above query returns 0 (incorrect).

index="xxx" sourcetype="xxx" value="374667" timeformat="%Y-%m-%d" earliest="2021-09-01" latest="2021-09-02"
| dedup value
| chart count

The above query returns 1 (expected).
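A frequent cause of this symptom is a type or whitespace mismatch on the join key (e.g. the events carry a number while the CSV carries a string, or one side has trailing spaces). One hedged sketch normalizes the key and uses the lookup command instead of join, assuming the CSV has a column named value:

```
index="xxx" sourcetype="xxx" value="374667" earliest="2021-09-01" latest="2021-09-02"
| eval value=trim(tostring(value))
| lookup lookup.csv value OUTPUT value as matched
| where isnotnull(matched)
| dedup value
| chart count
```

Unlike join, lookup is not subject to subsearch result limits, which also matters with a 75,000-row CSV.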
Hello Splunkers, I am looking to build an HTML page in a dashboard with ID, ID_Name, and other fields using a text box, dropdown, and other inputs, something like the example below/attached. I am planning to create a form where people can enter details, and to store everything in a KV store using outputlookup. Example:

ID     ID_Name     Submit
When trying to start the Analytics Agent as a machine agent extension for log analytics, the agent throws an exception:

localhost==> [system-thread-0] 08 Sep 2021 13:07:23,907 INFO InProcessLauncherTask - Starting to execute actual task with parameters [{csvMethodArgs=/opt/appdynamics/machine-agent/monitors/analytics-agent/conf/analytics-agent.properties, methodName=main, className=com.appdynamics.analytics.agent.AnalyticsAgent}]
localhost==> [system-thread-0] 08 Sep 2021 13:07:25,444 FATAL InProcessLauncherTask - Error occurred while attempting to load the task [{csvMethodArgs=/opt/appdynamics/machine-agent/monitors/analytics-agent/conf/analytics-agent.properties, methodName=main, className=com.appdynamics.analytics.agent.AnalyticsAgent}]
java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_292]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_292]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_292]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_292]
    at com.singularity.ee.agent.systemagent.task.util.InProcessLauncherTask.launch(InProcessLauncherTask.java:186) ~[machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.task.util.InProcessLauncherTask.execute(InProcessLauncherTask.java:150) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.task.analytics.AnalyticsAgentLauncher.execute(AnalyticsAgentLauncher.java:68) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:149) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ScheduledTaskRunner.run(ScheduledTaskRunner.java:41) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ManagedMonitorDelegate.setupEnvTask(ManagedMonitorDelegate.java:272) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ManagedMonitorDelegate.initializeMonitor(ManagedMonitorDelegate.java:212) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.readConfig(NodeMonitorManager.java:178) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.startAllMonitors(NodeMonitorManager.java:265) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.<init>(NodeMonitorManager.java:79) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.AgentMonitorManager.<init>(AgentMonitorManager.java:63) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.setupMonitorManager(Agent.java:486) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.startServices(Agent.java:393) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.SystemAgent.startServices(SystemAgent.java:79) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.start(Agent.java:378) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.appdynamics.agent.sim.legacy.DefaultLegacyAgentRegistrationStateManager$1.run(DefaultLegacyAgentRegistrationStateManager.java:80) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_292]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Caused by: com.appdynamics.common.framework.util.EmbeddedModeSecurityManager$SuppressedSystemExitException: System exit cannot be performed here as the application is running in embedded mode
    at com.appdynamics.common.framework.util.EmbeddedModeSecurityManager.checkExit(EmbeddedModeSecurityManager.java:65) ~[analytics-shared-common-21.7.0-2292.jar:?]
    at java.lang.Runtime.exit(Runtime.java:108) ~[?:1.8.0_292]
    at java.lang.System.exit(System.java:973) ~[?:1.8.0_292]
    at io.dropwizard.Application.onFatalError(Application.java:130) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at io.dropwizard.Application.onFatalError(Application.java:117) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_292]
    at io.dropwizard.Application.run(Application.java:94) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at com.appdynamics.common.framework.AbstractApp.callRunServer(AbstractApp.java:270) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingFile(AbstractApp.java:264) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:251) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:171) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:150) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.analytics.agent.AnalyticsAgent.main(AnalyticsAgent.java:101) ~[analytics-agent.jar:?]
    ... 27 more

Thanks in advance for your help.
Hello all, I've been trying to get the hang of syntax within Splunk and have been able to suss out a basic understanding. True to form, I usually end up jumping into the deep end, so bear with me.

I am attempting to create a report/search/dashboard that looks over the last four hours and displays the largest percentage increases of a value. The field is BIN, currently stored as a numerical value; I have tried the tostring command to transform it, but that usually ends with no values being returned or all of them being grouped together.

But I digress. How would I create a search/table view, updated on a set timeframe (say every hour), that counts each BIN as a percentage of the total records in each timeframe, calculates the percentage change between the two timeframes, and filters to show the top 20 increases? I would want to ignore any decreasing values and ideally only see the top 20 that increased by 15% or more.

Example:

BIN      Percent 2 hours ago   Percent 1 hour ago   Percent change
123456   10%                   12%                  16.7%
234561   10%                   8%                   -25%
345612   30%                   25%                  -20%
456123   35%                   30%                  -16.7%
561234   15%                   25%                  40%
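One way to sketch this, reusing the percent-change convention from the example table (change relative to the newer percentage): bucket the last four hours into two two-hour periods, chart counts per BIN per period, convert to percentages of each period's total, then filter and rank. The base search, threshold, and field names are assumptions mirroring the question:

```
index=... earliest=-4h
| eval period=if(_time >= relative_time(now(), "-2h"), "recent", "prior")
| chart count over BIN by period
| eventstats sum(prior) as prior_total sum(recent) as recent_total
| eval prior_pct=round(prior/prior_total*100,1)
| eval recent_pct=round(recent/recent_total*100,1)
| eval pct_change=round((recent_pct-prior_pct)/recent_pct*100,1)
| where pct_change >= 15
| sort - pct_change
| head 20
```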
Hi, I am trying to make use of the dashboard from this forum thread: Solved: Ever wonder which dashboards are being used and wh... - Splunk Community, but I am running into an "Unexpected close tag" error on this line:

<query>index="_internal" user!="-" sourcetype=splunkd_ui_access "en-US/app" | rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)" | search dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home"

Please advise. Also, I am adding this directly to the source of a dashboard rather than running it as a search; is that right?
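The raw < characters in the rex capture groups ((?<app>...), (?<dashboard>...)) are the likely culprit: the dashboard XML parser reads them as the start of a tag. Wrapping the query in a CDATA section (or escaping each < as &lt;) keeps the XML well-formed, and pasting it into the dashboard source is indeed the intended use:

```
<query><![CDATA[index="_internal" user!="-" sourcetype=splunkd_ui_access "en-US/app"
| rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
| search dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home"]]></query>
```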
I'm having an issue where Splunk doesn't build my knowledge bundles. My setup: one indexer cluster and two standalone search heads (no SH cluster). Both search heads use indexer discovery, and the setup used to work fine. Recently the knowledge bundle of one of the two search heads stopped getting updated on the indexers. I observe the following:

- All indexers always have an up-to-date knowledge bundle from the first search head in /opt/splunk/var/run/searchpeers, while the bundle from the second search head no longer gets updated and is outdated.
- When running "splunk show bundle-replication-config" on the two search heads, both show an identical config.
- When running "splunk show bundle-replication-status", one search head shows fully functional replication, while the other states "No knowledge bundle replication cycle status is available yet."
- The search head that shows the replication cycle error has no local knowledge bundle in /opt/splunk/var/run/ (while the other search head does have one).

Therefore I suspect the problem is not on the channel between search head and indexers; rather, something internal on the search head is dysfunctional and no longer builds the bundles in the first place. I did all the usual checks (reboot, filesystem permissions, btool check, ...). On the broken search head, I moved all local apps out of SPLUNK_HOME/etc/apps, emptied SPLUNK_HOME/etc/users, and restarted, but the knowledge bundle still wasn't getting built. In log.cfg on the SH I set DistributedBundleReplicationManager, BundleReplicationProvider, ClassicBundleReplicationProvider, CascadingBundleReplicationProvider, RFSBundleReplicationProvider, and RFSManager to DEBUG, but this didn't provide any insights. Any ideas about where we could look further?
This is the query that I am starting with:

index=index sourcetype=logs StringA
| stats count as A
| appendcols [search index=index sourcetype=logs StringB | stats count as B]
| eval percentage = (A / B) * 100

This works with no problems and returns the percentage as expected for the time period selected in the Splunk search. What I am trying to do is produce a timechart that graphs the percentage over time. I now have two queries that each produce a timechart for one part of the equation:

index=index sourcetype=logs StringA | timechart span=4h count by StringA
index=index sourcetype=logs StringB | timechart span=4h count by StringB

What I am attempting is a timechart of the percentage value, i.e. eval percentage = (StringA / StringB) * 100, but when I try to put the two searches into a single query, Splunk only shows the results of the first eval:

index=index sourcetype=logs ("StringA" OR "StringB")
| eval type=case(like(_raw, "%StringA%"), A, like(_raw, "%StringB%"), B)
| eval percentage=round((A / B)*100,1)
| fields -A,B
| timechart count by percentage span=4h
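A single-pass sketch that avoids appendcols entirely: flag each event, sum the flags per time bucket, then divide. Field names follow the question:

```
index=index sourcetype=logs ("StringA" OR "StringB")
| eval isA=if(like(_raw, "%StringA%"), 1, 0)
| eval isB=if(like(_raw, "%StringB%"), 1, 0)
| timechart span=4h sum(isA) as A sum(isB) as B
| eval percentage=round((A / B) * 100, 1)
| fields _time percentage
```

Because both counts are computed in the same timechart, each 4-hour bucket carries its own A and B, and the eval yields one percentage point per bucket.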
Hi all, I'm working on some cosmetic changes to a functional dashboard. In a standard dropdown, I have a field that populates a token. I fill this dropdown from a lookup table with the following columns: Institution, existingName, newName.

In this lookup, one item has a one-to-many relationship, i.e. one newName value has two existingName values. To get around this, I changed that selection to a static choice and modified the search to ignore it.

Static:
Name    Value
Z       "X" OR "Y"

Search:
| inputlookup lookup.csv
| where newName!="Z"

Functionally, this produces the results I want. The dashboard uses the token properly from both the static option and the lookup table. However, the static option sits at the top, with the rest sorted properly below it. This is a minor nitpick, but I'd like the static option sorted together with the lookup results. Is this possible? Should I pass the "X" OR "Y" into the lookup instead?
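One hedged approach is to fold the static choice into the populating search itself, so all options sort together; the label/value field names here are assumptions and must match the dropdown's fieldForLabel and fieldForValue settings:

```
| inputlookup lookup.csv
| where newName!="Z"
| eval label=newName, value=newName
| append
    [| makeresults
     | eval label="Z", value="\"X\" OR \"Y\""
     | fields label value]
| sort label
| fields label value
```

The static entry then becomes just another row in the search results, with the "X" OR "Y" string carried in the value field.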
A query with 300 results displays only 50 when mvzip is used. How can I display all 300 results?
index = pcf_logs cf_org_name = creorg OR cf_org_name = SvcITDnFAppsOrg cf_app_name=VerifyReviewConsumerService host="*"
| eval message = case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, msg)
| stats sum(Count) as Count by message
| table message Count

Each event has a msg field that contains "Auto Approved" or "Auto Rejected" somewhere in a longer sentence. I want to count the auto-approved and auto-rejected events, but this doesn't give the expected result.
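`sum(Count)` returns nothing unless a field literally named Count already exists on each event; to count events per category, plain `stats count` is the usual form. Grouping the OR with parentheses also keeps the org filter from combining with the other terms in unintended ways. A sketch:

```
index=pcf_logs (cf_org_name=creorg OR cf_org_name=SvcITDnFAppsOrg) cf_app_name=VerifyReviewConsumerService
| eval message=case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, msg)
| stats count by message
```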