All Topics


I wanted to search for the websites/URLs that people visited today, for a particular user. I tried this but didn't get any results. Any suggestions, please?

    index="*" sourcetype="WinEventLog:Security" user="*"

I would also like the login and logout times from Active Directory.

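For the Active Directory half of the question, a minimal sketch of a logon/logoff search, assuming the Security log lands in an index named wineventlog, that the Splunk Add-on for Windows field extractions are in place, and that jdoe is the account of interest (all assumptions; adjust to your environment). EventCodes 4624 and 4634 are the standard Windows logon/logoff events. Note that visited URLs are generally not in the Security log at all; those usually come from proxy, firewall, or DNS data.

    index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4624 OR EventCode=4634) user="jdoe" earliest=@d
    | eval action=if(EventCode=4624, "login", "logout")
    | table _time user action
    | sort _time
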
I'm trying to combine two similar values from the same field and rename the result. I would like to combine /v1/product and /v1/product/ and rename the combined value "Product API".

Search string:

    | stats count by urlPthTxt

I tried a few different commands but they didn't work. Please help. Thanks.

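One way to do this, as a sketch: normalize the field before the stats by stripping any trailing slash, then rename the normalized value (the field name urlPthTxt is from the question; the "Product API" label is the requested rename):

    | eval urlPthTxt=replace(urlPthTxt, "/$", "")
    | eval urlPthTxt=if(urlPthTxt="/v1/product", "Product API", urlPthTxt)
    | stats count by urlPthTxt
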
I am writing a test in Python where I want to validate that an ITSI KPI works as expected. Let's say I have an index called alerts, and I want the following data in the index, because it should trigger an alert:

    {"alert":"true", "time":"1666702756"}

I know there is a Splunk Eventgen application, but it feels too big for adding a single line. What is the simplest way to add an event to an index? For example, is it possible with an API call? I tried looking around but could not find a good example for something that feels very trivial.

Note: we cannot use the Splunk Python SDK, because we use a custom proxy/URL and the SDK does not support custom URLs. We are able to run queries with our own Python script, so if it is possible with an SPL query, that is fine too.

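Since an SPL query is acceptable, a minimal sketch using makeresults and collect (collect requires write access to the target index; the raw string mirrors the sample event above):

    | makeresults
    | eval _raw="{\"alert\":\"true\", \"time\":\"1666702756\"}"
    | collect index=alerts

If an API call is preferred, the HTTP Event Collector endpoint (/services/collector/event) is the usual REST route and works with any HTTP client, so the SDK limitation does not apply.
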
I have seen several posts asking similar questions, but I am not much of a UI guy, so they do not make sense to me. I have a Dashboard Studio dashboard, currently using a single value radial widget to display a Yes or No based on a query. I can post the query if it's helpful, but I don't think it matters, as the query just returns the string 'Yes' or 'No'. If the query result is Yes, I want the widget's background green; if No, I would like to display red. I am not committed to a single value radial; it's just what I was able to get working. Any suggestions on how to do this with a single value radial, or a suggestion for a different widget and how to change its background color?

    {
        "type": "viz.singlevalueradial",
        "title": "Non-Cycle Delivery Met",
        "dataSources": {
            "primary": "ds_3T2iIKSr"
        },
        "encoding": {},
        "options": {
            "backgroundColor": "#ffffff"
        },
        "showLastUpdated": true,
        "context": {},
        "showProgressBar": false
    }

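Dashboard Studio supports dynamic options through selectors and a matchValue config; a sketch along these lines, reusing the widget above and assuming the query's result field is named result (an assumption; substitute the real field name) and standard green/red hex values:

    {
        "type": "viz.singlevalueradial",
        "title": "Non-Cycle Delivery Met",
        "dataSources": {
            "primary": "ds_3T2iIKSr"
        },
        "encoding": {},
        "options": {
            "backgroundColor": "> primary | seriesByName('result') | lastPoint() | matchValue(colorMatches)"
        },
        "context": {
            "colorMatches": [
                {"match": "Yes", "value": "#118832"},
                {"match": "No", "value": "#D41F1F"}
            ]
        },
        "showLastUpdated": true,
        "showProgressBar": false
    }
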
I need a Splunk management app to monitor all Splunk servers for out-of-date or out-of-sync (running different versions) apps. Does anyone know which one I can use?

I made changes to search/metadata/local.meta that need to be deployed to the search heads. search/local/app.conf contains:

    [shclustering]
    deployer_push_mode = local_only

When I stage/send from the SHC deployer, after a rolling restart on the SH captain, search/metadata/local.meta remains the same as before the push. Does the SHC deployer push metadata/local.meta?

Splunk Enterprise 8.2.3 running on Red Hat Linux, 8-node search head cluster. Permission changes made:

From:

    []
    access = read : [ * ], write : [ admin, power ]
    export = system
    version = 8.2.3.2
    modtime = 1666399466.315512000

To:

    []
    access = read : [ admin, number_of_roles_here, user, user_ad_user ], write : [ admin, power ]
    owner = admin
    export = none
    version = 8.2.3.2
    modtime = 1666381189.171483000

We recently moved our Windows event log service up to Windows 2016 and Splunk 9.0.1, and all Security Auditing events are coming through with the message:

    Message=Splunk could not get the description for this event. Either the component that raises this event is not installed on your local computer or the installation is corrupt.

The event data is present, but without the usual field descriptions that allow Splunk to work out the structure. There are many posts about this, but they all date from over two years ago, and they all refer back to a master post from 2014 (https://community.splunk.com/t5/Getting-Data-In/quot-FormatMessage-error-quot-appears-in-indexed-message-for/m-p/139982#M28765) that doesn't appear to apply to current versions of Windows. I have, however, followed the broad advice in there:

- Checked the registry keys: they match the old server.
- Started Splunk after the event log service (I tried stopping and starting Splunk on a running host to mimic this).
- Confirmed that the event format is set to Events.

HF is Splunk 9.0.1 / Windows 2016 version 1607 Build 14393.5427 / Splunk Cloud is version 9.0.2208.3.

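For reference, the rendering mode the last check refers to is controlled per stanza in inputs.conf on the forwarder; a minimal sketch of the stanza involved (these are documented settings shown at their defaults, not a confirmed fix for the description error):

    [WinEventLog://Security]
    disabled = 0
    # renderXml = true switches to raw XML events, which carry the structured
    # EventData fields rather than relying on locally installed message DLLs
    # for FormatMessage description text
    renderXml = false
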
I have a search that a user recently moved from every hour to every 10 minutes.

Cron:

    3-59/10 * * * *

The search takes ~2 minutes to run and the window is set to auto. But I see this issue:

    10-25-2022 06:13:00.633 +0000 INFO SavedSplunker - savedsearch_id="nobody;rcc; Pull: Pull Domain IOCs from MISP", search_type="scheduled", user="thatOneGuyCausingProblemsForMe", app="myApp", savedsearch_name="Pull: Pull Domain IOCs from MISP", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_cluster-wide", concurrency_limit=1, scheduled_time=1666678380, window_time=-1, skipped_count=1, filtered_count=0

We have four very similar searches (similar schedule, duration, window, etc.), all with the same error, and the error fires very consistently. Splunk's complaint is that the given search is trying to run while another instance of the same search is running. But the searches only take ~2 minutes, and there are 10 minutes between them. I understand I can go into limits.conf and change the concurrency, but I do not see how these searches are overlapping themselves, and I don't want to just hide the problem behind more CPUs.

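One way to check whether the runs really finish in ~2 minutes is to pull the scheduler's own records for the search; a diagnostic sketch over the _internal index (the savedsearch_name is taken from the log line above; field availability can vary by status):

    index=_internal sourcetype=scheduler savedsearch_name="Pull: Pull Domain IOCs from MISP"
    | eval run_minutes=round(run_time/60, 1)
    | table _time status run_minutes skipped_count reason
    | sort - _time

If run_minutes regularly exceeds 10, the instances genuinely overlap; if not, the skips point at scheduler-level contention rather than the search's own duration.
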
Hello all, we have an application using the default database:internal sourcetype with DB Connect to send data to Splunk, and I want to extract a couple of fields from the incoming data on the Splunk side. So I was wondering: if I use a custom sourcetype (database:internal:xxxx:zzzz), should I extract the attributes from the data with "EXTRACT-xxxx = xxxx" in props.conf, or will the intended fields be extracted by default when we change to the custom sourcetype? Any explanation would be really appreciated. Thanks.

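For illustration, a minimal props.conf stanza for the custom sourcetype; the field name and pattern here are hypothetical placeholders, not something the sourcetype change provides automatically:

    [database:internal:xxxx:zzzz]
    # EXTRACT-<class> applies a search-time regex extraction to this sourcetype
    EXTRACT-status = status=(?<status>\w+)
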
Hello, I have 10 servers generating syslog. How do I ingest that syslog into the Splunk server? I have gone through the SC4S documentation. Do I have to install Splunk Connect for Syslog on all 10 machines, or is there another, better way to ingest the syslog? Also, can we use the secure syslog port 6514?

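For context, SC4S is typically deployed as one (or a few) central receivers that all 10 servers send to, not on every source machine. As an alternative sketch, Splunk can also listen for syslog directly via inputs.conf, including on a TLS port (the sourcetype is an assumption, and tcp-ssl additionally needs an [SSL] stanza with certificates, not shown here):

    [tcp-ssl://6514]
    sourcetype = syslog

    [udp://514]
    sourcetype = syslog
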
We are looking to extract the last part of a request URI that identifies a file name, where the URI has a client identifier variable in the middle.

Sample URI request:

    GET /someportal/rest/product/v1_0/clientidentifier/filename/fnm_123456789abcd.png HTTP/1.1

The file name (fnm_123456789abcd.png) is the value I need to extract. Note the space after the .png.

My current attempt is this:

    index=index source=/source sourcetype=sourcetype
    | rex field=_raw "GET /someportal/rest/product/v1_0/*/filename/(?<FileName>\d+)"

Please let me know how far off I am. Thanks.

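Two things in the attempt read as close but off: v1_0/*/ is glob syntax rather than the regex [^/]+ for "one path segment", and \d+ matches digits only, so it can never match fnm_123456789abcd.png. A sketch of a corrected pattern:

    index=index source=/source sourcetype=sourcetype
    | rex field=_raw "GET /someportal/rest/product/v1_0/[^/]+/filename/(?<FileName>\S+)"

\S+ stops at the first whitespace, which is exactly the space before HTTP/1.1.
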
https://community.splunk.com/t5/Splunk-Search/Fields-vs-table-vs-nothing/m-p/498525#M194897

I was looking at a Splunk-authored search (https://research.splunk.com/cloud/042a3d32-8318-4763-9679-09db2644a8f2/) which does exactly the table followed by stats. The table in this case seems totally unnecessary and, being a transforming command, would incur a performance cost. So, specifically in a clustered index environment, how does

    | fields A B C
    | stats count by A B C

work from a data-movement point of view? Clearly the fields command will limit which fields the indexers return to the search head. But if there is no fields command, does the stats run entirely on the SH, with (a) ALL raw data returned from the indexers, or (b) do the indexers only return the fields the stats command is going to use on the SH? If it is (a), then there is clearly a benefit in using fields before stats, but my expectation would be that it works like (b).

Hi, I just started to implement a cluster of 3 Events Service nodes with a load balancer. Every node starts healthy, then becomes unhealthy on two properties, and then goes down:

    [Elasticsearch] unhealthy...retrying

It is the same error on every node before it goes down. I tried to build this cluster in a Windows environment. I also tried a single node in the test environment, also on Windows, and it runs successfully.

I have a seemingly simple request: list the events and indicate whether each occurred during an outage. I have been trying for ages and I cannot get it to work; can anyone please help?

Base search for events:

    index=api_calls

CSV lookup recording the outage windows, called 'outages.csv' (UK-style dates):

    DateFrom             DateTo               Reason
    01/09/2022 09:00:00  30/09/2022 23:00:00  Testing 1
    01/10/2022 09:00:00  31/10/2022 09:00:00  Testing 2

This produces the correct outage row:

    | inputlookup outages.csv
    | eval t=now()
    | eval DateFromEpoch=strptime(DateFrom, "%d/%m/%Y %H:%M:%S")
    | eval DateToEpoch=strptime(DateTo, "%d/%m/%Y %H:%M:%S")
    | where DateFromEpoch <= t and DateToEpoch >= t
    | table Reason

Output is: Testing 2

I would have expected this to add the Reason field to the base results:

    index=api_calls
    | append
        [ inputlookup outages.csv
        | eval t=_time
        | eval DateFromEpoch=strptime(DateFrom, "%d/%m/%Y %H:%M:%S")
        | eval DateToEpoch=strptime(DateTo, "%d/%m/%Y %H:%M:%S")
        | where DateFromEpoch <= t and DateToEpoch >= t
        | table Reason ]
    | table _time Reason *

But for some reason I cannot get anything to add to the search, not even:

    index=api_calls
    | append
        [ | makeresults
        | eval Reason="hello"
        | table Reason ]
    | table _time Reason *

Ideally, I would like this as a macro so I can re-use it easily:

    index=api_calls
    | `is_outage(_time)`
    | table _time Reason *

I'm doing something wrong; any help appreciated.

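Two observations and a sketch. append adds the subsearch rows as separate results at the bottom rather than annotating each event (the makeresults test does produce a row, but it lands at the very end with an empty _time, which is easy to miss), and inside an inputlookup subsearch there is no _time field, so eval t=_time yields null and the where discards every row. One way to annotate events instead: join the outage windows onto every event as a single multivalue field, then test each window per event with mvmap (available since Splunk 7.3). This assumes Reason values never contain the "|" delimiter:

    index=api_calls
    | eval joiner=1
    | join type=left joiner
        [ | inputlookup outages.csv
          | eval joiner=1
          | eval window=strptime(DateFrom, "%d/%m/%Y %H:%M:%S") . "|" . strptime(DateTo, "%d/%m/%Y %H:%M:%S") . "|" . Reason
          | stats values(window) as window by joiner ]
    | eval Reason=mvmap(window,
        if(_time >= tonumber(mvindex(split(window, "|"), 0))
           AND _time <= tonumber(mvindex(split(window, "|"), 1)),
           mvindex(split(window, "|"), 2), null()))
    | fields - joiner window
    | table _time Reason *

Wrapped in a macro taking no arguments (it reads _time directly), this would give the `is_outage` behavior described.
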
Can I limit foreach iterations, or place a where clause (or other filter) in the foreach subsearch? I'm attempting to flatten a JSON field because I have multiple "roots" of the JSON that host the same fields I need access to. For instance:

    json1.x.y
    json2.x.y
    json3.x.y

and I want to work with all of the "y" fields at once by referencing them as "y". I know a single "y" that will always exist, but the others are potentially dynamic, so I can't hardcode the JSON flattening with a rename. Currently I'm running the search below; the issue is that "| foreach *.jsonConstant.*" iterates through all the JSON roots (1/2/3) and makes my results null if the correct root wasn't the last to run. I'm unsure why it iterates through all JSON roots when, in the current event, all columns related to the other roots are null.

    MYSEARCH ("json1.jsonConstant.knownName"=* OR "json2.jsonConstant.knownName"=* OR "json3.jsonConstant.knownName"=*)
    | eval jsonRoot=case(isnotnull('json1.jsonConstant.knownName'), "json1",
                         isnotnull('json2.jsonConstant.knownName'), "json2",
                         isnotnull('json3.jsonConstant.knownName'), "json3",
                         1=1, 0)
    | eval temp=""
    | foreach *.jsonConstant.* matchseg1=SEG1 matchseg2=SEG2
        [ eval temp=temp . "|" . jsonRoot . ":" . "<<FIELD>>" . ":" . "SEG1" . "/" . "SEG2"
        | eval SEG2='<<FIELD>>' ]
    | stats count by knownName

An example of the error I get: for event 1 the root is json1, but knownName ends up null because the foreach ran over json1, json2, and json3, and the most recent loop (json3) was null for all fields. For event 2 the root is json3, and all fields extract/flatten correctly because json3 ran last. The temp field above is what I'm using to debug. I can't run a where clause within the foreach subsearch, because then it never runs any of the code in the foreach subsearch.

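foreach runs its template once per matching field name in the result set, regardless of whether that field is null in a given event, so the usual workaround is to make the assignment conditional rather than trying to filter the iterations. A sketch, assuming jsonRoot is computed as above; since matchseg tokens are substituted textually (as the temp debugging already relies on), comparing "SEG1" against jsonRoot guards the wrong roots from clobbering the value:

    | foreach *.jsonConstant.* matchseg1=SEG1 matchseg2=SEG2
        [ eval SEG2=if("SEG1" = jsonRoot, '<<FIELD>>', SEG2) ]

On the iteration for the correct root the flattened field is assigned; on every other iteration the else branch leaves whatever is already there untouched.
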
Is there an XML equivalent to the Dashboard Studio clearDefaultOnSelection feature? I'm not looking for a JavaScript solution, just a straight, pure XML solution. The 'default' token value in my case is '.*'. Here is my attempt; it doesn't work, so perhaps I have a syntax error?
Is there an XML equivalent to the Dashboard Studio clearDefaultOnSelection feature?  I'm not looking for a Java Script solution, just a straight..pure XML solution.  The 'default' token value in my case is '.*' Here is my attempt and it doesn't work..perhaps I have a syntax error? <change> <eval token="form.multi_token">if(match($form.multi_token$, "^\.\*") AND $form.multi_token$ != "^\.\*$", replace($form.multi_token$, "\.\*", ""), $form.multi_token$)</eval> </change>
How do I schedule a cron alert or report to run every 2 weeks on a specific day? I need it to run at end of day every other Sunday.

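Standard five-field cron, which Splunk uses, has no notion of "every other week", so one common workaround is to schedule the search weekly and let it no-op on the off weeks. A sketch, assuming week-of-year parity is an acceptable definition of "every other Sunday" (note the parity can hiccup where week numbering resets at New Year).

Cron, every Sunday at 23:59:

    59 23 * * 0

Appended to the search so odd-numbered weeks return nothing:

    | where tonumber(strftime(now(), "%U")) % 2 = 0
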
How do we specify multiple output groups on a HEC token, the way _TCP_ROUTING works for monitor stanzas?

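For a single group, HEC token stanzas in inputs.conf accept an outputgroup setting; a sketch of that form (the token value is a placeholder, and whether a comma-separated list of groups is honored here the way _TCP_ROUTING is for monitor stanzas is unverified):

    [http://my_hec_token]
    token = 00000000-0000-0000-0000-000000000000
    outputgroup = group1
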
Hello all, we have a problem: our Elasticsearch Data Integrator add-on for Splunk is using a forbidden character in the index it queries, because it connects to a "frontend" cluster. Let me explain. They have a "frontend" cluster that uses index patterns to search across clusters, and its endpoint is what our Elasticsearch Data Integrator app for Splunk is connected to. The "backend" cluster is the one containing our index. So the infrastructure looks like this:

    Cluster Backend > Cluster Frontend > Splunk add-on

Backend cluster index: security-audit-XXX
Frontend cluster index pattern: *:security-audit-*

As stated in Elastic's documentation, the use of a colon (:) inside an index name is forbidden, yet the cross-cluster index pattern requires one. Does anybody have any suggestions on how to tackle this?

So I've searched and searched but can't seem to find an answer to my issue. I need to add an "All" option to my dynamic dropdown. I have found answers that seem simple enough: either add All, * to the static choices, or alter the XML code. I've tried both (I think when I altered the XML code it ended up behaving exactly the same as just adding the options to the static section), and each time I get a "search string cannot be empty" error. Don't know if it matters, but I did watch a couple of YouTube videos; their searches used | table fieldname | dedup fieldname at the end. When I did that I got the same issue, but all the field values were grouped together, so I'm using | stats count by fieldname at the end.

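For comparison, a minimal sketch of a dynamic dropdown with an "All" choice in Simple XML (the index and field names are placeholders). The "search string cannot be empty" error usually points at the populating <query> element being empty rather than at the All choice itself:

    <input type="dropdown" token="field_tok">
      <label>Field</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>fieldname</fieldForLabel>
      <fieldForValue>fieldname</fieldForValue>
      <search>
        <query>index=my_index | stats count by fieldname</query>
      </search>
    </input>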