All Posts

Hello everyone, I need a solution for this. My data:

userID=text123, login_time="2024-03-21 08:04:42.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 08:00:00.001000", ip_addr=12.3.3.45
userID=text123, login_time="2024-03-21 08:02:12.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 07:02:42.201000", ip_addr=12.3.3.34

I want to get the events for userID="text123" in the last 5 minutes, but only if multiple ip_addr values were used. I tried join, map, and append but could not solve it. Please help with the SPL for this.
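One possible approach, sketched under the assumption that userID and ip_addr are already extracted fields and that "multiple IPs" means more than one distinct ip_addr for the user in the window (the index name is a placeholder):

index=your_index userID="text123" earliest=-5m
| stats dc(ip_addr) as ip_count values(ip_addr) as ips latest(login_time) as last_login by userID
| where ip_count > 1

This keeps the user only when more than one distinct IP address appears in the last 5 minutes, without join, map, or append.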
Hi, could you please share an image for the request below?
Hello, how can I set the time range from a dropdown in Dashboard Studio? For example, with a dropdown containing (Kindergarten, Elementary, Middle School, High School):

If "Kindergarten" is selected ==> time range => Last 24 hours
If "Elementary" is selected ==> time range => Last 3 days
If "Middle School" is selected ==> time range => Last 7 days
If "High School" is selected ==> time range => Last 30 days

Thank you so much.
Hi all, can anyone confirm whether there is a prebuilt dashboard available for SAP Customer Data Cloud? If there is no pre-built dashboard, how would I pull all the logs from the SAP system into Splunk to monitor infrastructure metrics in a dashboard?

Note: I have currently enabled all the logs and they are sent via the connector; however, I can't see the endpoint logs.
@marnall Not really, it’s like if I’m running the search for the last 24 hrs, I’d like to see the data for now()+1d.
Find out what the current time is, then compare it to your window times:

| eval timeNow = strftime(now(), "%H%M")
| where (timeNow < 2350 AND timeNow > 0015) ```Outside of main window```
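A slightly fuller sketch of the same idea, converting the formatted time to a number so the comparison is unambiguously numeric (the 23:50 to 00:15 window is taken from the example above):

| eval timeNow = tonumber(strftime(now(), "%H%M"))
| where timeNow < 2350 AND timeNow > 15 ```outside the 23:50 to 00:15 window; 0015 as a number is 15```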
There is no one API that will return all of the KOs owned by a given user. You will have to combine multiple API results to get the full list.

| rest /services/saved/searches ``` Searches, reports, alerts ```
| rest /services/data/ui/views ``` Dashboards ```
| rest /services/data/macros ``` Macros ```
| rest /services/data/lookup-table-files ``` Lookup files ```
| rest /services/saved/eventtypes ``` Eventtypes ```

Those are some of the more common ones. See other available API endpoints using | rest /services/data or | rest /services/saved
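A sketch of one way to combine a few of those results and filter them to a single owner (the username and the ko_type labels are placeholders; splunk_server=local is assumed for a single search head):

| rest /services/saved/searches splunk_server=local
| eval ko_type="saved search"
| append [| rest /services/data/ui/views splunk_server=local | eval ko_type="dashboard"]
| append [| rest /services/data/macros splunk_server=local | eval ko_type="macro"]
| search eai:acl.owner="some_user"
| table title ko_type eai:acl.app eai:acl.sharing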
Yes, I noticed that as well. I see the event count from before the eventstats removes the results that are over my 'where count' statement limit. I'm searching back 15 minutes and only have a few hundred events based on my geolocation and other criteria before the eventstats. But a few hundred is too many for a single person to weed through when looking for legitimate user activity among a few hundred non-legitimate user events. Thanks for the information.
Either the data in the summary index is incorrect or it's being used incorrectly. Was the data written using an si* command (sistats, sichart, sitimechart, etc.)? If so, then it must be read using the same query and the corresponding non-si command (stats, chart, timechart, etc.). Tell us more about the two queries and we may be able to be more specific.
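As a sketch of that pairing, assuming the summary index is populated by a scheduled search with summary indexing enabled (the index name, sourcetype, and search name are placeholders):

Populating (scheduled) search, writing to index=my_summary:
index=web sourcetype=access_combined | sitimechart span=1h count by status

Reading search against the summary:
index=my_summary source="My populating search name" | timechart span=1h count by status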
So, mvexpand may work, but it depends on how you got into this position to begin with.  What's the query?
As I was going through the Asset and Identity Management manual, I couldn't see anything related to how to enrich the two lookup files assets_by_cidr.csv and assets_by_str.csv. For some reason (I couldn't figure out why), assets_by_str.csv is filled with data and is populating results when running any search. However, nothing is getting written to assets_by_cidr.csv. I'm not sure if this is supposed to be filled automatically, and I can't find any configuration that indicates where these two CSVs take their data from.

I can only see that they're coming from the app SA-IdentityManagement. Can someone please help in troubleshooting this? Where are these two lookup tables expected to get their data from, and how? Lastly, to give more context, the final purpose is to fulfill a data enrichment request for this specific use case: Detect Large Outbound ICMP Packets.
Well, I always have a problem with clear explanations, sorry about that. Look at the graph below; it is exactly what I need.

One series (bars) is a count for each unique value >> timechart count by kmethod
The second series (black line) is just a simple sum or average function >> timechart sum(kmethod)
It is not clear what you are trying to do here - the second one generates a count for each unique value of kmethod - which presumably is a number, since the first one is summing it? Please can you clarify what you are trying to do, perhaps provide some sample (anonymised) events so we can see what you are dealing with, and an example of your expected result?
I have a Splunk instance that is deployed on an EBS volume mounted to an EC2 instance. I started working on enabling SmartStore for one of my indexes, but whenever indexes.conf is configured to let one of my indexes use SmartStore, Splunk hangs on this step when I restart it:

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...

Nothing found in the logs; I am just puzzled about how to fix this. Can anybody hint at what the issue could be?

indexes.conf:

[volume:s3volumeone]
storageType = remote
path = s3://some-bucket-name
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com

[smart_store_index_10]
remotePath = volume:s3volumeone/$_index_name
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
maxGlobalDataSizeMB = 0
maxGlobalRawDataSizeMB = 0
homePath.maxDataSizeMB = 1000
maxHotBuckets = 2
maxDataSize = 3
maxWarmDBCount = 5
frozenTimePeriodInSecs = 10800

The small numbers for bucket sizes etc. are intentional, to allow quick testing of the settings.
Do you have the date_* fields in your data? If so, you can do this search:

earliest=-1mon (date_hour>=$start_hour_token$ date_minute>=$start_minute_token$) (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ date_minute<$end_minute_token$))

If you don't have those fields extracted, then you will have to use an eval statement to create the date_hour and date_minute fields and then a where clause to do the same comparison as above.
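If the date_* fields are not available, a sketch of the eval-based equivalent, mirroring the same comparison (the index name is a placeholder and the tokens are the same dashboard tokens as above):

index=your_index earliest=-1mon
| eval date_hour=tonumber(strftime(_time, "%H"))
| eval date_minute=tonumber(strftime(_time, "%M"))
| where (date_hour>=$start_hour_token$ AND date_minute>=$start_minute_token$) AND (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ AND date_minute<$end_minute_token$))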
It's worth noting, however, as @bowesmana pointed out, that eventstats is a relatively "heavy" command because it needs to generate the whole result set and gather it on a search head in order to create the statistics which it later adds to the results. With a small data set you can get away with just calling eventstats and processing the results further. If your initial result set is big, you might indeed want to limit the set of processed fields (including removing _raw if it's no longer needed), as in the sketch below.
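A small sketch of that trimming step (the index, sourcetype, and field names are just examples):

index=your_index sourcetype=your_sourcetype
| fields user src_ip action
| fields - _raw ``` drop the raw event text if it is no longer needed ```
| eventstats count as user_events by user
| where user_events > 100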
To begin with, two examples. The first one:

index=s1 | timechart sum(kmethod) avg(kduration)

generates a two-series chart. The second one uses 'count by':

index=s1 | timechart count by kmethod

and generates just one series.

I would like to join both timecharts and, in a way, merge the "count by" with a simple "avg" or "sum", so that:

- the first series is the 'stacked bar' from the second example
- the second series is the 'line' from the second series of the first example

Any hints?

K.
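One pattern that might give this, sketched under the assumption that the bars should be the count split by kmethod and the line an overall avg(kduration) over the same span (the line is then selected as a chart overlay in the chart's format options):

index=s1
| timechart span=1h count by kmethod
| appendcols [search index=s1 | timechart span=1h avg(kduration) as avg_duration]

With identical time ranges and spans, appendcols lines the avg_duration column up against the split counts, and avg_duration can then be shown as the line on top of the stacked bars.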
As @ITWhisperer has said, subsearches run first, so you can't pass time from the outer search to the subsearch, but the addinfo command is generally a way to know what time range you have for your search, so in the subsearch you can do something like this to remove the entries from the lookup that do NOT fit in the time range of the search:

[ | inputlookup lookup.csv
  | addinfo
  | eval first_time=strftime(info_min_time, "%H")
  | eval last_time=strftime(info_max_time, "%H")
  | rex field=Time "[^ ]* (?<hour>\d+)"
  | where hour>=first_time AND hour<=last_time ]

This takes the HOUR part of your lookup Time value and compares it to the search time range, retaining only the lookup entries that match the time range of your search, so when you combine the entries after this subsearch, only those from the lookup that are relevant to the range are collected with the real event data.
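For context, a sketch of how that subsearch might sit inside a full search (the index and sourcetype are placeholders), appending the time-filtered lookup rows to the indexed events:

index=your_index sourcetype=your_sourcetype
| append
    [ | inputlookup lookup.csv
      | addinfo
      | eval first_time=strftime(info_min_time, "%H")
      | eval last_time=strftime(info_max_time, "%H")
      | rex field=Time "[^ ]* (?<hour>\d+)"
      | where hour>=first_time AND hour<=last_time ]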
The INDEXED_EXTRACTIONS configuration belongs in props.conf on the universal forwarder.

| tstats count where index=* sourcetype=my_json_data by host
| stats values(host)

The search above should tell you which hosts need to be looked at. On those, remove INDEXED_EXTRACTIONS = json from the SHs and indexers and move this configuration (INDEXED_EXTRACTIONS = json) to the forwarder's props.conf. Make sure the forwarder's inputs.conf for the JSON source you are ingesting tags the data with the appropriate sourcetype, then reference that sourcetype stanza in props.conf for your config, i.e. on the UF:

inputs.conf
[monitor:///file]
sourcetype = foo_json
index = bar

props.conf
[foo_json]
INDEXED_EXTRACTIONS = json

See: https://docs.splunk.com/Documentation/Splunk/6.5.2/Admin/Configurationparametersandt[…]A.&_ga=2.147263155.568450395.1710801981-1206481253.1693859797

INDEXED_EXTRACTIONS is unique in that it happens in the structured parsing queue of the universal forwarder, whereas parsing usually happens at a HF, or at the indexer if there is no HF. If you use a HF as the first point of ingest and no UF, then you place the configuration there on the HF.

See: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Extractfieldsfromfileswithstructureddata

If you have Splunk Cloud Platform and want to configure the extraction of fields from structured data, use the Splunk universal forwarder.
In email alerts, there is a checkbox for "Inline", which puts the search results table into the body of the email. If you would like more control over it, you could do some SPL magic to build a single field containing the HTML for a table in the arrangement you want, then put that field in the body.
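A sketch of that SPL, assuming a simple two-column result and that the alert's email body then references the generated field (for example via the $result.body$ token; the index and field names are placeholders):

index=your_index
| stats count by host
| eval row="<tr><td>" . host . "</td><td>" . count . "</td></tr>"
| stats list(row) as rows
| eval body="<table border=1><tr><th>host</th><th>count</th></tr>" . mvjoin(rows, "") . "</table>"
| fields body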