All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am looking for a way to limit user searches to only the most recent 30 days, specifically for SmartStore purposes. For example, if you have SmartStore, you want to restrict who can search old data; otherwise, a few runaway users could cause your indexers to evict recent data from the local cache.

Configuring Roles / Resources / "Role search time window limit" does not do this, because it designates a window size rather than an earliest allowed time. A user could still search -30d to -60d, then -60d to -90d, and so on, which still evicts your recent data. This method also fails quietly: if a user searches back 31 days, they have to click "Jobs" to find out their search was limited, and that's not good!

Configuring Roles / Restrictions and adding "earliest = -30d" does not do this either, because it overrides the UI time picker completely (i.e. users can no longer search just the last 60 minutes).

The closest I have come to controlling this is adding "_index_earliest = -30d" under Roles / Restrictions, but users can still trigger SmartStore downloads. I am asking this question believing there isn't currently an answer, but I'd like to at least raise awareness.

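For anyone hitting the same wall: newer Splunk releases added a srchTimeEarliest setting to authorize.conf that caps how far back a role's searches may reach, which is closer to what is being asked for than the window limit. A minimal sketch, assuming a hypothetical role name and a Splunk version whose authorize.conf.spec lists this setting (verify before relying on it):

[role_recent_only]
# Hypothetical role. srchTimeEarliest is expressed in seconds before "now";
# 2592000 s = 30 days. Check authorize.conf.spec for your release, since
# this setting only exists in newer versions.
srchTimeEarliest = 2592000
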
Hello! I am troubleshooting a report, and I've cut it down to the basics with the following two snippets. Basically, 'join' with a CSV is not returning the expected results. The dataset between Sept 1 and Sept 2 has about 75,000 unique entries (but the base search with value=374667 only has about 30!).

index="xxx" sourcetype="xxx" value="374667" timeformat="%Y-%m-%d" earliest="2021-09-01" latest="2021-09-02"
| join value [inputlookup lookup.csv]
| dedup value
| chart count

The above query returns 0 (incorrect).

index="xxx" sourcetype="xxx" value="374667" timeformat="%Y-%m-%d" earliest="2021-09-01" latest="2021-09-02"
| dedup value
| chart count

The above query returns 1 (expected).

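A diagnostic sketch for cases like this, assuming the CSV really has a column literally named value (a case, type, or whitespace mismatch in that column is a common reason join returns nothing). First check whether the lookup itself contains the value:

| inputlookup lookup.csv
| search value=374667

If that returns zero rows, the mismatch is on the CSV side (stray whitespace or a differently cased header are frequent culprits). If it does return rows, a lookup-based match avoids join's subsearch limits entirely:

index="xxx" sourcetype="xxx" value="374667" timeformat="%Y-%m-%d" earliest="2021-09-01" latest="2021-09-02"
| lookup lookup.csv value OUTPUT value as matched
| where isnotnull(matched)
| dedup value
| chart count
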
Hello Splunkers, I am looking to build an HTML-style page in a dashboard with ID, ID_Name, and other fields using text boxes, dropdowns, and other inputs, something like the example below/attached. I am planning to create a form where people can enter details and store everything in a KV store using outputlookup. Example: ID     ID_Name     Submit

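For the storage half of this, a minimal sketch of the usual pattern: the dashboard inputs set tokens, and a search writes those tokens into a KV-store-backed lookup. The token names ($id_token$, $name_token$) and the lookup definition name (my_kvstore_lookup) below are hypothetical placeholders:

| makeresults
| eval ID="$id_token$", ID_Name="$name_token$"
| fields ID ID_Name
| outputlookup append=true my_kvstore_lookup

Here my_kvstore_lookup would be a lookup definition in transforms.conf pointing at a KV store collection; append=true adds the new row without replacing existing records.
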
When trying to start the Analytics Agent as a Machine Agent extension for Log Analytics, the agent throws an exception:

localhost==> [system-thread-0] 08 Sep 2021 13:07:23,907 INFO InProcessLauncherTask - Starting to execute actual task with parameters [{csvMethodArgs=/opt/appdynamics/machine-agent/monitors/analytics-agent/conf/analytics-agent.properties, methodName=main, className=com.appdynamics.analytics.agent.AnalyticsAgent}]
localhost==> [system-thread-0] 08 Sep 2021 13:07:25,444 FATAL InProcessLauncherTask - Error occurred while attempting to load the task [{csvMethodArgs=/opt/appdynamics/machine-agent/monitors/analytics-agent/conf/analytics-agent.properties, methodName=main, className=com.appdynamics.analytics.agent.AnalyticsAgent}]
java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_292]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_292]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_292]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_292]
    at com.singularity.ee.agent.systemagent.task.util.InProcessLauncherTask.launch(InProcessLauncherTask.java:186) ~[machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.task.util.InProcessLauncherTask.execute(InProcessLauncherTask.java:150) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.task.analytics.AnalyticsAgentLauncher.execute(AnalyticsAgentLauncher.java:68) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:149) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ScheduledTaskRunner.run(ScheduledTaskRunner.java:41) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ManagedMonitorDelegate.setupEnvTask(ManagedMonitorDelegate.java:272) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.ManagedMonitorDelegate.initializeMonitor(ManagedMonitorDelegate.java:212) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.readConfig(NodeMonitorManager.java:178) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.startAllMonitors(NodeMonitorManager.java:265) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.NodeMonitorManager.<init>(NodeMonitorManager.java:79) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.components.monitormanager.AgentMonitorManager.<init>(AgentMonitorManager.java:63) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.setupMonitorManager(Agent.java:486) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.startServices(Agent.java:393) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.SystemAgent.startServices(SystemAgent.java:79) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.singularity.ee.agent.systemagent.Agent.start(Agent.java:378) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at com.appdynamics.agent.sim.legacy.DefaultLegacyAgentRegistrationStateManager$1.run(DefaultLegacyAgentRegistrationStateManager.java:80) [machineagent.jar:Machine Agent v21.7.0-3160 GA compatible with 4.4.1.0 Build Date 2021-07-23 23:04:28]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_292]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Caused by: com.appdynamics.common.framework.util.EmbeddedModeSecurityManager$SuppressedSystemExitException: System exit cannot be performed here as the application is running in embedded mode
    at com.appdynamics.common.framework.util.EmbeddedModeSecurityManager.checkExit(EmbeddedModeSecurityManager.java:65) ~[analytics-shared-common-21.7.0-2292.jar:?]
    at java.lang.Runtime.exit(Runtime.java:108) ~[?:1.8.0_292]
    at java.lang.System.exit(System.java:973) ~[?:1.8.0_292]
    at io.dropwizard.Application.onFatalError(Application.java:130) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at io.dropwizard.Application.onFatalError(Application.java:117) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_292]
    at io.dropwizard.Application.run(Application.java:94) ~[dropwizard-core-2.0.17.jar:2.0.17]
    at com.appdynamics.common.framework.AbstractApp.callRunServer(AbstractApp.java:270) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingFile(AbstractApp.java:264) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:251) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:171) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:150) ~[analytics-shared-framework-21.7.0-2292.jar:?]
    at com.appdynamics.analytics.agent.AnalyticsAgent.main(AnalyticsAgent.java:101) ~[analytics-agent.jar:?]
    ... 27 more

Thanks in advance for your help.

Hello All, I've been trying to get the hang of the syntax within Splunk and have been able to suss out a basic understanding. True to form, I usually end up jumping into the deep end when I do things, so bear with me.

I am attempting to create a report/search/dashboard that looks over the last four hours and displays the largest percent increase of a value. The field is BIN, currently stored as a numerical value; I have tried the tostring command to transform it, but that usually ends with no values being returned or all of them grouped together. But I digress. How would I first create a search/table view, updated on a set schedule (let's say every hour), that counts each BIN as a percentage of total records for its timeframe, calculates the percentage change between the two timeframes, and filters to the top 20 increases? I would want to ignore any decreasing values and ideally only see the top 20 that had increased by at least 15%.

Example (percent change here is measured relative to the more recent share):

BIN      Percent 2 hours ago   Percent 1 hour ago   Percent change
123456   10%                   12%                   16.7%
234561   10%                    8%                  -25%
345612   30%                   25%                  -20%
456123   35%                   30%                  -16.7%
561234   15%                   25%                   40%

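A sketch of one way to do the two-bucket comparison in SPL. The index name and the 15%/top-20 thresholds are placeholders, and the change is computed relative to the more recent share, matching the example table above:

index=my_index earliest=-2h@h latest=@h
| eval period=if(_time>=relative_time(now(),"-1h@h"), "recent", "prior")
| stats count by period BIN
| eventstats sum(count) as total by period
| eval pct=round(count/total*100, 2)
| chart values(pct) over BIN by period
| eval pct_change=round((recent-prior)/recent*100, 1)
| where pct_change>=15
| sort - pct_change
| head 20

One caveat worth knowing: a BIN that appears in only one of the two buckets gets a null prior or recent value and is dropped by the where clause; fillnull before the eval would keep those rows if that matters.
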
Hi,

I am trying to make use of the dashboard from this forum thread: Solved: Ever wonder which dashboards are being used and wh... - Splunk Community, but I am running into an "Unexpected close tag" error on this line:

<query>index="_internal" user!="-" sourcetype=splunkd_ui_access "en-US/app" | rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)" | search dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home"

Please advise. Also, I am adding this directly to the Source of a dashboard instead of a search; is that right?

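For reference, dashboard source is XML, so the literal < characters inside the query (here the rex named groups (?<app>...) and (?<dashboard>...)) are parsed as opening tags, which is exactly what produces an "Unexpected close tag" error when the parser then hits </query>. A common fix is to XML-escape the angle brackets or wrap the SPL in a CDATA section; a sketch of the CDATA form:

<query><![CDATA[
index="_internal" user!="-" sourcetype=splunkd_ui_access "en-US/app"
| rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
| search dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home"
]]></query>
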
Having an issue where Splunk doesn't build my knowledge bundles. My setup: one indexer cluster and two standalone search heads (no SH cluster). Both search heads use indexer discovery, and the setup used to work fine. Recently the knowledge bundle of one of the two search heads stopped getting updated on the indexers. I observe the following:

- All indexers always have an up-to-date knowledge bundle from the first search head in /opt/splunk/var/run/searchpeers, while the bundle from the second search head no longer gets updated and is outdated.
- When running "splunk show bundle-replication-config" on the two search heads, both show an identical config.
- When running "splunk show bundle-replication-status", one search head shows fully functional replication, while the other states "No knowledge bundle replication cycle status is available yet."
- The search head that shows the error with the replication cycle status has no local knowledge bundle in /opt/splunk/var/run/ (while the other search head does have one).

Therefore I suspect the problem is not on the channel between search head and indexers; rather, something internal on the search head is dysfunctional and no longer builds the bundles in the first place. I did all the usual checks (reboot, filesystem permissions, btool check, ...). On the broken search head, I moved all local apps out of SPLUNK_HOME/etc/apps, emptied SPLUNK_HOME/etc/users, and restarted, but the knowledge bundle still wasn't getting built. In log.cfg on the SH I set DistributedBundleReplicationManager, BundleReplicationProvider, ClassicBundleReplicationProvider, CascadingBundleReplicationProvider, RFSBundleReplicationProvider, and RFSManager to DEBUG, but this didn't provide any insights. Any ideas about where we could search further?

This is the query that I am starting with:

index=index sourcetype=logs StringA
| stats count as A
| appendcols [search index=index sourcetype=logs StringB | stats count as B]
| eval percentage = (A / B) * 100

This works with no problems and returns the percentage as expected for the time period selected in the Splunk search. What I am trying to do is produce a timechart that graphs the percentage over time. I now have two queries that each produce a timechart for one part of the equation:

index=index sourcetype=logs StringA | timechart span=4h count by StringA
index=index sourcetype=logs StringB | timechart span=4h count by StringB

What I want is a timechart of the percentage value, i.e. eval percentage = (StringA/StringB) * 100, but when I try to combine the two searches into a single query, Splunk shows only the results of the first eval:

index=index sourcetype=logs ("StringA" OR "StringB")
| eval type=case(like(_raw, "%StringA%"), A, like(_raw, "%sStringB%"), B)
| eval percentage=round((A / B)*100,1)
| fields -A,B
| timechart count by percentage span=4h

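A sketch of the usual pattern for this: classify each event into a type, timechart the counts by that type, and only then compute the ratio per time bucket. Note that the string results in case() need quotes (unquoted A and B are read as field names, which is likely why only the first eval seemed to work), and searchmatch() avoids the _raw wildcards:

index=index sourcetype=logs ("StringA" OR "StringB")
| eval type=case(searchmatch("StringA"), "A", searchmatch("StringB"), "B")
| timechart span=4h count by type
| eval percentage=round((A / B) * 100, 1)
| fields _time percentage

After the timechart, each row has one A column and one B column per 4-hour bucket, so the eval produces one percentage point per bucket.
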
Hi all,

Working on some cosmetic changes to a functional dashboard. A standard dropdown populates a token from a lookup table with the following columns: Institution, existingName, newName. In this lookup, one item has a one-to-many relationship, i.e. one newName value has two existingName values. To get around this, I changed that selection to a static choice and modified the populating search to ignore it.

Static:

Name     Value
Z        "X" OR "Y"

Search:

| inputlookup lookup.csv
| where newName!="Z"

Functionally, this produces the results I want: the dashboard uses the token properly from both the static option and the lookup table. However, the static option sits at the top while the rest are sorted properly. This is a minor nitpick, but I'd like to sort the static option and the lookup results together. Is this possible? Should I pass the "X" OR "Y" into the lookup instead?

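One way to get a single sorted list is to fold the static entry into the populating search itself, so everything flows through the same sort. A sketch; the label/value field names must match whatever fieldForLabel and fieldForValue are set to on the dropdown:

| inputlookup lookup.csv
| where newName!="Z"
| fields newName
| rename newName as label
| eval value=label
| append
    [| makeresults
     | eval label="Z", value="\"X\" OR \"Y\""
     | fields label value]
| sort 0 label

With this in place the static <choice> can be removed from the dropdown entirely, since "Z" now arrives through the search like every other option.
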
A query with 300 results displays only 50 when mvzip is used. How do I display all 300 results?

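Without seeing the query it is hard to say, but one frequent cause worth checking: mvzip(x, y) returns null whenever either argument is null, so rows missing one of the two fields silently disappear after a subsequent mvexpand or filter. Coalescing the inputs first keeps those rows. A sketch with hypothetical index and field names:

index=my_index
| eval zipped=mvzip(coalesce(field1, "-"), coalesce(field2, "-"))
| mvexpand zipped
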
index = pcf_logs cf_org_name = creorg OR cf_org_name = SvcITDnFAppsOrg cf_app_name=VerifyReviewConsumerService host="*"
| eval message = case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, msg)
| stats sum(Count) as Count by message
| table message Count

My events have a msg field that contains "Auto Approved" or "Auto Rejected" somewhere inside a long sentence. I want to count the auto-approved and auto-rejected events, but this query doesn't give the expected result.

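Two things stand out in the query above. First, sum(Count) sums a field literally named Count, which these events may not carry, so the stats output can be empty even when messages match; if the goal is to count events per category, counting rows is usually what's wanted. Second, the two OR'd cf_org_name terms should be grouped in parentheses so the other filters apply to both. A sketch:

index=pcf_logs (cf_org_name=creorg OR cf_org_name=SvcITDnFAppsOrg) cf_app_name=VerifyReviewConsumerService
| eval message=case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, "Other")
| stats count as Count by message
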
How can I integrate Trend Micro Apex One with Splunk Enterprise?

I would like to ingest the data in /var/log as correctly as possible. Currently I am simply monitoring the entire /var/log folder with no pre-selected source type. On the List of pretrained source types I see entries for a few log files, such as syslog, but the majority of log files are not present in that list. Perhaps some of these types can be reused elsewhere, though? For example, the linux_messages_syslog pretrained type refers to logs in /var/log/messages, and since syslog != messages I presume this type may be useful on other files as well. So should I use the few pretrained source types and then create my own source types for all the other log files? Is there any repository of user-created source types? I have to imagine most log file formats have had source types created for them by now. Or do people just not apply source types and simply search the unstructured data?

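A sketch of the per-file pattern in inputs.conf: assign pretrained source types where they exist and let a directory monitor catch the rest. The stanza paths are illustrative; linux_messages_syslog and linux_secure are both on Splunk's pretrained list:

[monitor:///var/log/messages]
sourcetype = linux_messages_syslog

[monitor:///var/log/secure]
sourcetype = linux_secure

[monitor:///var/log]
# Catch-all for everything not matched above; Splunk will auto-detect
# or assign a generic source type. The blacklist regex excludes files
# that already have their own stanza.
blacklist = (messages|secure)$

The specific-stanza-plus-blacklisted-catch-all split keeps the two monitors from claiming the same files twice.
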
Hi Team,

I am trying to fetch the count and percentage of hosts having successes and failures, along with the failure percentage, for host = Server1 Server2 Server3 Server4 Server5. If a specific host has no events at all, I want to show that host as well, with a result of 0. I am running the query below, but it misses some servers because there have been no events on those servers in the last 2 hours.

index=server_list host IN (Server1,Server2,Server3,Server4,Server5) event_status="*"
| eval pass=if(like(event_status,"20%"),1,0)
| eval fail=if(!like(event_status,"20%"),1,0)
| stats count as Overall_Volume, sum(pass) as Passed, sum(fail) as Failed by host
| eval Failure_Rate=round(Failed/(Passed+Failed)*100,2)
| fillnull value=0

This is the result I am getting, because Server4 and Server5 have had no traffic in the last 2 hours:

host      Overall_Volume   Passed   Failed   Failure_Rate
Server1    2                1        1        50
Server2   10                6        4        40
Server3    1                0        1       100

Can anyone help with a query so that I get all servers, with values of 0 if there is no traffic?

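A common pattern is to append one zero row per expected host and then collapse with stats, so hosts with no events still appear. A sketch (adjust the host list as needed):

index=server_list host IN (Server1,Server2,Server3,Server4,Server5) event_status="*"
| eval pass=if(like(event_status,"20%"),1,0), fail=if(!like(event_status,"20%"),1,0)
| stats count as Overall_Volume, sum(pass) as Passed, sum(fail) as Failed by host
| append
    [| makeresults
     | eval host=split("Server1,Server2,Server3,Server4,Server5", ",")
     | mvexpand host
     | eval Overall_Volume=0, Passed=0, Failed=0
     | fields host Overall_Volume Passed Failed]
| stats max(Overall_Volume) as Overall_Volume, max(Passed) as Passed, max(Failed) as Failed by host
| eval Failure_Rate=if(Passed+Failed>0, round(Failed/(Passed+Failed)*100, 2), 0)

Because the appended rows are all zeros, max() per host keeps the real counts where events exist and falls back to 0 where they do not, and the guarded eval avoids dividing by zero for the quiet servers.
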
Hello

I have read the Splunk documentation regarding subsearches: https://docs.splunk.com/Documentation/Splunk/8.2.2/Search/Aboutsubsearches

There are two things I don't understand.

1) Unless I am mistaken, the subsearch below

sourcetype=syslog [search sourcetype=syslog earliest=-1h | top limit=1 host | fields + host]

provides the same result as if I don't use a subsearch:

sourcetype=syslog earliest=-1h | top limit=1 host | fields + host

but the main difference is that with the subsearch I directly collect only the matching events, whereas with the standard search I collect all the events with earliest=-1h and only afterwards narrow down to the top host with top limit=1. Is that correct?

2) The documentation says that the second reason to use a subsearch is to "Run a separate search and add the output to the first search using the append command". Does that mean we can only use the append command after brackets, or is it also possible to use the join, appendcols, or appendpipe commands? I have already seen this done! If it is possible, what are the differences between using append with a subsearch and using join, appendcols, or appendpipe?

Thanks in advance

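For reference on point 2: append, appendcols, and join all take a subsearch in brackets; they just combine its output differently. append adds the subsearch rows after the main rows, appendcols pastes the subsearch columns beside the main results row by row, and join matches rows on a shared field. A small sketch of the three shapes, using syslog and the pretrained access_combined source type as stand-ins:

sourcetype=syslog | stats count by host
| append [search sourcetype=access_combined | stats count by host]

sourcetype=syslog | stats count as syslog_count
| appendcols [search sourcetype=access_combined | stats count as web_count]

sourcetype=syslog | stats count by host
| join host [search sourcetype=access_combined | stats count as web_count by host]

(appendpipe is the odd one out: its brackets contain a pipeline applied to the current result set rather than a separate search.)
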
Good Morning, I have created a simple dashboard that returns information on specific errors. The returned fields are time of event, server, sourcetype, and event. From the data returned, I want to be able to click on a value in a cell and open the specific log information. I feel like I am close, but I am really new to Splunk.

index=OH sourcetype=Buckeyes host=Server1 source=/usr/local/ESP/Agent/spool/CDCDEV/MAIN/* "Exception text ORA00001: unique constraint" OR "fatal segments" OR "Maximum 91" OR "ORA01017: invalid username/password; logon denied" OR "Caused by: com.nf.nexus.batchx.workflow.WorkflowException: Error!!! job with name" OR "Unable to locate work flow implementation for es.link.batchx.nfs.control.processtask.monitor.recover.WorkflowImpl" OR "SEC049: Invalid Logon Attempt|Incorrect Password."
| dedup host, source
| fields - _serial, _bkt, _cd, _indextime, _si, _subsecond, splunk_server, tag::host, _sourcetype
| fields + _time, _raw, host, source
| convert timeformat="%m-%d-%Y %l:%M:%S %p" ctime(_time)
| eval _time=(substr(_time, 0, 11) + substr(_raw, 0, 8))
| eval source=substr(source,40)
| rename _raw AS "Event", source AS "ESP Job", _time AS "Time", host AS "Server"

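A minimal sketch of a table drilldown for this in Simple XML. The $row.*$ tokens reference the renamed columns of the clicked row, and the |s filter adds safe quoting around the event text; the time range is a placeholder to adapt:

<table>
  <search>
    <query><![CDATA[ ...the search above... ]]></query>
  </search>
  <drilldown>
    <link target="_blank">search?q=search index=OH host="$row.Server$" $row.Event|s$&amp;earliest=-24h&amp;latest=now</link>
  </drilldown>
</table>

Clicking any cell then opens a new search tab scoped to that row's server and raw event text, which is usually enough to land on the specific log entry.
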
I've installed the OTel collector to collect infrastructure monitoring data from Linux and Windows machines. I'm not able to get values for IP address, MAC address, or serial number for those machines in the cloud. I checked in the UI: I can see OS information, but I also need at least one of those three fields to be synced up. Is there a way to sync network data in infrastructure monitoring?

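For what it's worth, recent builds of the upstream OpenTelemetry Collector's resourcedetection processor let the system detector emit host.ip and host.mac as opt-in resource attributes. Whether your distribution and version support this needs checking against its docs, so treat the following as an assumption rather than a confirmed fix. A sketch of the processor config:

processors:
  resourcedetection:
    detectors: [system]
    system:
      resource_attributes:
        # Opt-in attributes; assumed available in your collector version.
        host.ip:
          enabled: true
        host.mac:
          enabled: true

The processor still has to be referenced in the relevant metrics pipeline for the attributes to reach the backend.
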
Hi, I wrote the following query to identify searches running in verbose mode, but it seems to be including reports that have been deleted. Is there a field I can use to exclude them? I have had a look, but nothing obvious comes to me.

| rest /servicesNS/-/-/saved/searches
| search alert_type = "always" NOT title="instrumentation.*"
| table eai:acl.owner title description *mode* is_scheduled
| search display.page.search.mode=verbose

Thanks

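Deleted saved searches should normally drop out of that REST endpoint, so the stale entries may actually be disabled copies or orphaned private objects living under individual users. A sketch that filters out disabled objects and surfaces the owning app and sharing level, which may help narrow down where the leftovers live:

| rest /servicesNS/-/-/saved/searches
| search alert_type="always" NOT title="instrumentation.*" disabled=0
| search display.page.search.mode=verbose
| table eai:acl.owner eai:acl.app eai:acl.sharing title description is_scheduled
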
How do I add the two values from stats which I get from this query?

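Without the query it is hard to be specific, but there are two usual shapes (index and field names here are hypothetical). If the two values are columns on one stats row, add them with eval; if they are separate rows, sum them with a second stats:

index=my_index
| stats count(eval(status="ok")) as A, count(eval(status="fail")) as B
| eval total = A + B

index=my_index
| stats count by status
| stats sum(count) as total

For the single-row case, | addtotals fieldname=total also adds the numeric columns in one step.
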