All Posts

Timestamp extraction is done before transforms are processed. Consider setting props based on source rather than sourcetype.

[source::object_rolemap_audit.csv]
sourcetype = awss3:object_rolemap_audit

[source::authz-audit.csv]
sourcetype = awss3:authz_audit

[aws:s3:csv]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
TRUNCATE = 20000

[awss3:object_rolemap_audit]
TIME_FORMAT = %d %b %Y %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

[awss3:authz_audit]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
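A hedged sketch of what that could look like: the filenames and time formats are taken from the stanzas above, but whether other parse-time settings also need to move into the source stanzas depends on your data.

[source::object_rolemap_audit.csv]
sourcetype = awss3:object_rolemap_audit
TIME_FORMAT = %d %b %Y %H:%M:%S

[source::authz-audit.csv]
sourcetype = awss3:authz_audit
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q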
"smartstore" and "AWS S3" are not the same thing.  SmartStore (S2) is a Splunk feature that separates storage from compute.  It relies on storage providers that follow the S3 standard, but does not h... See more...
"smartstore" and "AWS S3" are not the same thing.  SmartStore (S2) is a Splunk feature that separates storage from compute.  It relies on storage providers that follow the S3 standard, but does not have to use AWS.  AWS S3 is just one provider of on-line storage, but it is not suitable for storing live Splunk indexes. The reason S2 gets away with using AWS S3 for storing indexes is because it keeps a cache of indexed data local to the indexers.  It's this cache that serves search requests; any data not in the cache has to be fetched from AWS before it can be searched. Note that S2 stores ALL warm data.  There is no cold data with S2. All warm data remains where it is until it is rolled to cold or frozen.  There is no way for warm buckets to reside in one place for some period of time and some place else for another period of time. Freezing data either deletes it or moves it to an archive.  Splunk cannot search frozen data. To keep data local for 45 days and remote for 45 days would mean having a hot/warm period of 45 days and a cold period of 45 days.  Note that each period is measured based on the age of the newest event in the bucket rather than when the data moved to each respective storage tier. Data moves from warm to cold based on size rather than time so you would have to configure the buckets so they hold a day of data and then size the volume so it hold 45 days of buckets.  Configure the cold volume to be remote.  Splunk will move the buckets to the cold volume as the warm volume fills up.  Data will remain in cold (remote) storage until it expires (frozenTimePeriodInSecs=3110400 (90 days)).
@PickleRick There are multiple eventTypes in my logs. If I include all eventTypes then I get a lot of results. Please assist.
You're omitting the important part - the "other eventtype search".
Also, you don't have to escape slashes in the regex.
Regardless of splitting the event, there are no "merged" cells in Splunk, so you can't visualize it this way.
index=query_mcc
| eval data=split(_raw," ")
| mvexpand data
| eval data = split(data, ",")
| eval Date = strftime(_time, "%Y-%m-%d-%H:%M:%S")
| eval Category = mvindex(data, 1)
| eval Status = mvindex(data, -1)
| eval Command = mvindex(data, 0)
| table host, Date, Category, Status, Command
Calculations can be done with fields in the same event.

index=test_index sourcetype="test_source" className=export
| eval total = 'message.totalExportedProfileCounter' + 'message.exportedRecords'

(Field names containing dots need single quotes inside eval expressions.)

If these fields do not have values in the same event, you need to use something like stats to correlate different events into the same result row. For this you need a common field value between the events to correlate them by.
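A hedged sketch of the stats approach, assuming the two counters share a correlation field; exportId here is a hypothetical field name, not one from the question.

index=test_index sourcetype="test_source" className=export
| stats latest(message.totalExportedProfileCounter) as profileCount
        latest(message.exportedRecords) as exportedRecords
        by exportId
| eval total = profileCount + exportedRecords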
Firstly, you don't need the / at the beginning and end of the regex string in the rex command; if anything, these should be replaced with double quotes. Secondly, you have an underscore instead of a hyphen in your regex (assumed-role), which doesn't match your sample data.

| rex field=responseElements.assumedRoleUser.arn "arn:aws:sts::(?<accountId>\d{12}):assumed-role\/(?<assumedRoled>.*)\/vault-oidc-(?<userId>\w+)-*."
I have filled in the form, Ryan.
I have the below Splunk query which gives me results. My SPL searches eventType IN (security.threat.detected, security.internal.threat.detected) and returns src_ip results. But the same src_ip field has multiple user_id results in other eventTypes. I want my SPL to take the src_ip results, search them against the other eventTypes, and filter where user_id="*idp*".

Example: if my src_ip=73.09.52.00, then that src_ip should be searched against the other available eventTypes and the result filtered if user_id=*idp*.

My current SPL:

index=appsrv_test sourcetype="OktaIM2:log" eventType IN (security.threat.detected, security.internal.threat.detected)
| rex field=debugContext.debugData.url "\S+username\=(?<idp_accountname>\S+idp-references)"
| search NOT idp_accountname IN (*idp-references*)
| regex src_ip!="47.37.\d{1,3}.\d{1,3}"
| rename actor.alternateId as user_id, target{}.displayName as user, client.device as dvc, client.userAgent.rawUserAgent as http_user_agent, client.geographicalContext.city as src_city client.geographicalContext.state as src_state client.geographicalContext.country as src_country, displayMessage as threat_description
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result details
| stats values(src_ip) as src_ip count by _time signature threat_description eventType dvc src_city src_state src_country http_user_agent details
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `okta_threatinsight_threat_detected_filter`
Set the default of your text input to "*" and extend your initial search like this

index=<your index and other filters>
    [| makeresults
     | fields - _time
     | eval username="$your_text_search$"
     | where username!="*"
     | return username]
You can override the time picker by setting earliest and latest on your initial search line

index=Activedirectory earliest=-24h latest=now()
Could someone explain to me how this cluster command works in the backend? I couldn't find any resource that explains the technique/algorithm behind the cluster command. How does it cluster the matches (termlist/termset/ngramset)? How is t calculated? It doesn't seem to be probability based. What kind of clustering algorithm does it use? It would be best if someone could explain the full algorithm for the cluster command. Many thanks.
You could save the IP addresses to a lookup file and use inputlookup to retrieve them for use in your reports. Admittedly, this answer is vague as it depends on the nature of your searches and what you are trying to achieve as output, e.g. this method could be used if you want to combine the reports into just 6 with all the IP addresses in each report, although you could also update the searches to take into account the extra dimension of IP address (depending on your actual searches).
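A hedged sketch of that pattern; the lookup name ip_seed.csv and its ip column are hypothetical, and the rename should target whichever field holds the address in your events.

index=<your index and other filters>
    [| inputlookup ip_seed.csv
     | fields ip
     | rename ip as src_ip]
| <rest of one of your six report searches>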
Try something like this

index=* handler=traffic <today timerange> OR <yesterday timerange>
| eval day=if(_time < relative_time(now(),"@d"), "yesterday", "today")
| timechart span=30m dc(dsid) as traffic by day
| eval delta_traffic = today-yesterday
Let me describe a different scenario. Suppose you do nothing, and use whichever time marker you already have (and I understand even less why you want to invent your own instead of using Splunk's built-in time selector - you can customize the selector any way you want, too). Use it for the entire world. Simply ask your users to set their preference to UTC in their own preferences. Would that work?

From user-prefs.conf:

[general]
tz = <timezone>
* Specifies the per-user timezone to use.
* If unset, the timezone of the Splunk Server or Search Head is used.
* Only canonical timezone names such as America/Los_Angeles should be used (for best results use the Splunk UI).
* No default.

Your users do not need to edit the file, of course. The time zone can be set from Splunk Web.
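For reference, the file-based equivalent would look roughly like this; the path and username are illustrative, and Etc/UTC is one canonical way to spell UTC.

# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
tz = Etc/UTC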
| stats values(Group_Name) as Group_Name by admin user_name
| eval Group_Name=mvjoin(Group_Name, ",")
| stats values(user_name) as user_name by admin Group_Name
| eval user_name=mvjoin(user_name,",")
Hi @DaClyde , I love that cartoon! Good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I have six different SPL queries that I run on a specific IP address. Is it possible to save a search as a report, schedule that report to run, and provide the report a seed file of IP addresses? I have 20+ IP addresses that I need to run the same 6 reports on. Right now I'm just running them interactively, which is a pain. Any suggestions?