
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi Team, I want to mask two fields, "password" and "cpassword", in events that are being written with plain-text values, so that each value is replaced with #####. Sample event information:

[2024-01-31_07:58:28] INFO : REQUEST: User:abc CreateUser POST: name: AB_Test_Max;email: xyz@gmail.com;password: abc12345679;cpassword: abc12345679;role: User;
[2024-01-30_14:05:42] INFO : REQUEST: User:xyz CreateUser POST: name: Math_Lab;email: abc@yahoo.com;password: xyzab54;cpassword: xyzab54;role: Admin;

Kindly help with the props.conf so that I can apply SEDCMD to mask these fields.
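A minimal props.conf sketch for this, assuming [your_sourcetype] is a placeholder for the actual sourcetype of these events and that password values never contain a semicolon. A single sed expression with a capture group handles both fields, since a plain "password" pattern would also match inside "cpassword":

# props.conf on the indexer or heavy forwarder that first parses this data
# [your_sourcetype] is a placeholder - replace with the real sourcetype
[your_sourcetype]
# (c?password) captures either field name; [^;]+ is the value up to the next semicolon
SEDCMD-mask_passwords = s/(c?password): [^;]+/\1: #####/g

Note that SEDCMD applies at index time only, so it masks new events as they are ingested; data that is already indexed stays unchanged.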
Hi All, just wanted to ask: we have Splunk ES and we use ServiceNow for triggered alerts. My question is, if there are a few alerts that I want to give priority P2, how can I do that in Splunk, given that the default priority in Splunk is P3?
When you say "in another application", what do you mean? The predict command can be used to predict future trends: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Predict
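For illustration, a minimal sketch of predict on top of a timechart, assuming a hypothetical index and field name; future_timespan controls how many spans ahead to forecast:

index=web_metrics
| timechart span=1d count as requests
| predict requests future_timespan=7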
Try this - use your index; I am assuming that the event _time stamp is the login time.

index=bla userID=text123 earliest=-5m@m latest=@m
| stats dc(ip_addr) as ips by userID
| where ips>1

If your events contain other info than just login details, then you may need to add login_time=* to the search.
Hello everyone, I need a solution for this. My data:

userID=text123, login_time="2024-03-21 08:04:42.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 08:00:00.001000", ip_addr=12.3.3.45
userID=text123, login_time="2024-03-21 08:02:12.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 07:02:42.201000", ip_addr=12.3.3.34

I want to get the data where userID="text123" AND the events are within the last 5 minutes AND there are multiple IPs. I have used join, map, and append but did not solve it. Please help with the SPL for this.
Hi, could you please share some images for the request below?
Hello, how can I set the time range from a dropdown in Dashboard Studio? For example:

Dropdown: (Kindergarten, Elementary, Middle School, High School)

If "Kindergarten" is selected ==> time range => Last 24 hours
If "Elementary" is selected ==> time range => Last 3 days
If "Middle School" is selected ==> time range => Last 7 days
If "High School" is selected ==> time range => Last 30 days

Thank you so much
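One way to sketch this in the Dashboard Studio source (JSON), where the input name input_school, the token name school_earliest, the data source name ds_school, and the query are all placeholder assumptions: the dropdown stores a relative-time string in a token, and the data source uses that token as its earliest query parameter.

"inputs": {
    "input_school": {
        "type": "input.dropdown",
        "title": "School",
        "options": {
            "items": [
                { "label": "Kindergarten", "value": "-24h" },
                { "label": "Elementary", "value": "-3d" },
                { "label": "Middle School", "value": "-7d" },
                { "label": "High School", "value": "-30d" }
            ],
            "defaultValue": "-24h",
            "token": "school_earliest"
        }
    }
},
"dataSources": {
    "ds_school": {
        "type": "ds.search",
        "options": {
            "query": "index=your_index | timechart count",
            "queryParameters": {
                "earliest": "$school_earliest$",
                "latest": "now"
            }
        }
    }
}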
Hi All, can anyone confirm whether there is any prebuilt dashboard available for SAP Customer Data Cloud? If there is no pre-built dashboard, how would I pull all the logs from the SAP system into Splunk to monitor infrastructure metrics in a dashboard?

Note - currently I have enabled all the logs and they are sent via the connector; however, I can't see endpoint logs.
@marnall Not really, it's like if I'm running the search for the last 24 hrs, I'd like to see the data for now()+1d.
Find out what the current time is, then compare it to your window times:

| eval timeNow = tonumber(strftime(now(), "%H%M"))
| where timeNow < 2350 AND timeNow > 15 ``` outside of the maint. window (00:15-23:50) ```

Note that strftime() returns a string, so wrap it in tonumber() before the numeric comparison (a leading-zero literal such as 0015 is just the number 15).
There is no single API that will return all of the KOs owned by a given user. You will have to combine multiple API results to get the full list.

| rest /services/saved/searches ``` Searches, reports, alerts ```
| rest /services/data/ui/views ``` Dashboards ```
| rest /services/data/macros ``` Macros ```
| rest /services/data/lookup-table-files ``` Lookup files ```
| rest /services/saved/eventtypes ``` Eventtypes ```

Those are some of the more common ones. See other available API endpoints using | rest /services/data or | rest /services/saved
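As a rough sketch of combining them, assuming "target_user" is a placeholder for the owner you are interested in: append the endpoints together, tag each result with its type, and filter on the owner field.

| rest /services/saved/searches
| eval ko_type="saved search"
| fields title ko_type eai:acl.owner
| append [| rest /services/data/ui/views | eval ko_type="dashboard" | fields title ko_type eai:acl.owner]
| append [| rest /services/data/macros | eval ko_type="macro" | fields title ko_type eai:acl.owner]
| where 'eai:acl.owner'="target_user"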
Yes, I noticed that as well. I see the event count before eventstats removes the fields that are over my 'where count' statement limit. I'm searching back 15 minutes and only have a few hundred events, based on my geolocation and other criteria, before the eventstats. But a few hundred is too many for a single person to weed through looking for legitimate user activity when there are a few hundred non-legitimate user events. Thanks for the information.
Either the data in the summary index is incorrect or it's being used incorrectly. Was the data written using an si* (sistats, sichart, sitimechart, etc.) command? If so, then it must be read using the same query and the corresponding non-si (stats, chart, timechart, etc.) command. Tell us more about the two queries and we may be able to be more specific.
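For illustration, a minimal sketch of that pairing, where the source index, summary index, and summary search name are all hypothetical:

Scheduled search that populates the summary index:
index=web sourcetype=access_combined | sistats count by status

Reporting search that reads it back with the matching non-si command:
index=summary source="my_summary_search" | stats count by status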
So, mvexpand may work, but it depends on how you got into this position to begin with.  What's the query?
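For reference, a self-contained sketch of what mvexpand does, using makeresults so it can be run anywhere: it turns one event with a multivalue field into one event per value.

| makeresults
| eval vals=split("a,b,c", ",")
| mvexpand vals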
As I was going through the Asset and Identity Management manual, I couldn't see anything about how to enrich the two lookup files assets_by_cidr.csv and assets_by_str.csv. For some reason (I couldn't figure out why), assets_by_str.csv is filled with data and populates data when running any search. However, nothing is getting fetched into assets_by_cidr.csv; I'm not sure whether it is supposed to be filled automatically, and I can't find any configuration that shows where these two CSVs take their data from... I can only see that they come from the app SA-IdentityManagement. Can someone please help in troubleshooting this? Where are these two lookup tables expected to get their data from, and how? Lastly, to give more context, the final purpose is to fulfill a data enrichment request for this specific use case: Detect Large Outbound ICMP Packets...
Well - I always have problems with clear explanations, sorry about that. So look at the graph below; it is exactly what I need. One series - the bars - is a count for each unique value >> timechart count by kmethod. The second series - the black line - is just a simple sum or average function >> timechart sum(kmethod)
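One common way to sketch this, assuming a hypothetical index: chart the per-value counts as columns, and add a row total that can be rendered as a line.

index=your_index
| timechart span=1h count by kmethod
| addtotals fieldname=Total

In the chart formatting options, Total can then be set as a chart overlay so it renders as a line over the column series.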
It is not clear what you are trying to do here - the second one generates a count for each unique value of kmethod - which presumably is a number, since the first one is summing these? Please can you clarify what you are trying to do, perhaps providing some sample (anonymised) events so we can see what you are dealing with, and an example of your expected result?
I have a Splunk instance that is deployed on an EBS volume mounted to an EC2 instance. I started working on enabling SmartStore for one of my indexes, but whenever indexes.conf is configured to let one of my indexes use SmartStore, restarting Splunk hangs on this step:

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...

Nothing found in the logs; I am just puzzled about how to fix this. Can anybody hint at what the issue could be? indexes.conf:

[volume:s3volumeone]
storageType = remote
path = s3://some-bucket-name
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com

[smart_store_index_10]
remotePath = volume:s3volumeone/$_index_name
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
maxGlobalDataSizeMB = 0
maxGlobalRawDataSizeMB = 0
homePath.maxDataSizeMB = 1000
maxHotBuckets = 2
maxDataSize = 3
maxWarmDBCount = 5
frozenTimePeriodInSecs = 10800

The small numbers for bucket size etc. are intentional, to allow quick testing of the settings.
Do you have the date_* fields in your data? If so, you can do this search...

earliest=-1mon (date_hour>$start_hour_token$ OR (date_hour=$start_hour_token$ date_minute>=$start_minute_token$)) (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ date_minute<$end_minute_token$))

If you don't have those fields extracted, then you will have to use an eval statement to create the date_hour and date_minute fields and then a where clause to do the same comparison as above.
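A minimal sketch of that eval, assuming the tokens resolve to plain numbers (e.g. 9 and 30):

| eval date_hour=tonumber(strftime(_time, "%H")), date_minute=tonumber(strftime(_time, "%M"))
| where (date_hour>$start_hour_token$ OR (date_hour=$start_hour_token$ AND date_minute>=$start_minute_token$)) AND (date_hour<$end_hour_token$ OR (date_hour=$end_hour_token$ AND date_minute<$end_minute_token$))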
It's worth noting, however, as @bowesmana pointed out, that eventstats is a relatively "heavy" command, because it needs to generate the whole result set and gather it on a search head in order to compute the statistics which it later adds to the results. With a small data set you can get away with just calling eventstats and processing the results further. If your initial result set is big, you might indeed want to limit the set of processed fields (including removing _raw if it's no longer needed).
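For illustration, a minimal sketch of that trimming, with hypothetical index and field names: dropping _raw (and anything else not needed) before eventstats reduces what has to be gathered on the search head.

index=your_index
| fields - _raw
| eventstats count by user
| where count > 10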