All Posts


I'm trying to get Splunk Enterprise to log the creation of a user on the same system where Splunk is installed. My Splunk version is 9.3.1. Alongside this install, I've installed the latest Universal Forwarder (Windows) on localhost (127.0.0.1). When installing:
- I skip the SSL page and click "Next"
- select "Local System" and click "Next"
- check all items under "Windows Log Events" and click "Next"
- create an admin account and password
- leave the "Deployment Server" settings empty
- enter "127.0.0.1:9997" as host and port for the "Receiving Indexer"
- finish the installer
Then I create a user (net user <user> /add) in CMD. After this step I return to Splunk Search and enter * as the search criteria, but nothing is found. Even when I search for the username I added, the software finds nothing. Can someone tell me what I'm doing wrong or what the issue could be? Thanks! Gerd
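A quick sanity check for a setup like this, sketched below (assumptions: the hostname is a placeholder, and the default time-range picker in Search & Reporting can hide events if timestamps are off, so the sketch searches all time across all indexes, including the internal ones):

(index=* OR index=_*) host=<forwarder_hostname> earliest=0

If even index=_internal contains nothing from the forwarder, the forwarding path itself is the first thing to verify: the receiving port 9997 must be enabled on the Splunk Enterprise side under Settings > Forwarding and receiving > Configure receiving. Also note that the Windows Security event log only records user creation if the relevant audit policy (user account management auditing) is enabled.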
@PickleRick Thanks for the suggestion. I made the following changes in transforms.conf:
1) For [new_sourcetype]:
   - Removed SOURCE_KEY = source
2) For [route_to_teamid_index]:
   - Updated the regex
   - Set WRITE_META = true

After these changes, the sourcetype value successfully changed to "aws:kinesis:starflow", but the data did not route to the specified index. Instead, it went to the default index.

Current configs:

props.conf
#custom-props-for-starflow-logs
[source::.../starflow-app-logs...]
TRANSFORMS-set_new_sourcetype = new_sourcetype
TRANSFORMS-set_route_to_teamid_index = route_to_teamid_index

transforms.conf
#custom-transforms-for-starflow-logs
[new_sourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source
FORMAT = index::$1
DEST_KEY = _MetaData:Index
WRITE_META = true

I'm confident that both my props.conf and the [new_sourcetype] stanza in transforms.conf are functioning correctly. The only issue seems to be with [route_to_teamid_index].
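One thing worth comparing against Splunk's documented index-routing form: when DEST_KEY = _MetaData:Index is used, FORMAT is expected to be the bare index name (so $1, not index::$1), the source metadata key is spelled MetaData:Source, and WRITE_META is meant for writing indexed fields to _meta rather than for use together with a DEST_KEY. A sketch of the stanza in that shape (same regex as above; whether the captured team names are real indexes is an assumption - if the computed index doesn't exist, events typically end up dropped or in the last-chance/default index, which would match the symptom):

[route_to_teamid_index]
SOURCE_KEY = MetaData:Source
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
DEST_KEY = _MetaData:Index
FORMAT = $1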
I have the events below, and I need to display only the event that has action=test and category=testdata.

test {
line1: 1
"action": "test",
"category": "testdata",
}

test1 {
line1: 1
"action": "event",
"category": "testdata",
}

test2 {
line1: 1
"action": "test",
"category": "duplicate_data",
}
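A sketch of one way to filter for that combination (index and sourcetype are placeholders; since the events are not clean JSON, the fields are pulled out with rex rather than spath):

index=<your_index> sourcetype=<your_sourcetype>
| rex "\"action\":\s*\"(?<action>[^\"]+)\""
| rex "\"category\":\s*\"(?<category>[^\"]+)\""
| where action="test" AND category="testdata"

If action and category are already extracted at search time, the two rex lines can simply be dropped.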
I have an event that looks like the one below. The key-value pair "test":"22345" exists more than once, and I need only these events in the output, as a table or a count of events.

{"test":"22345","testType":"model"},{"test":"22345","testType":"model1"},{"test":"22345","testType":"model2"}
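One possible sketch (index and sourcetype are placeholders; it assumes the repeated pairs all sit inside a single raw event): extract every "test" value into a multivalue field, then keep events where "22345" occurs more than once.

index=<your_index> sourcetype=<your_sourcetype>
| rex max_match=0 "\"test\":\"(?<test_id>[^\"]+)\""
| eval dup_count=mvcount(mvfilter(test_id="22345"))
| where dup_count > 1
| table _time dup_count _raw

Replace the final table with | stats count if only the number of matching events is needed.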
Wow, this is exactly what I wanted! I spent hours trying to figure this out. Thanks again for the clear instructions.
First things first.
1. Just for the sake of completeness of the info - are the logs ingested by inputs on this HF, not forwarded from a remote host?
2. To debug one thing at a time, I'd start with something foolproof like a simple SEDCMD transform that adds a single letter to each event, and attach it to a source. That way you're not wondering whether the props part is wrong or the transform itself. Once you've made sure the props entry is OK because your transform is actually getting called, move on to debugging your index overwriting. (A minimal sketch of such a canary is shown below.)
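For reference, a minimal sketch of that kind of canary (the sourcetype name is a placeholder; s/$/X/ appends a literal X to each event):

# props.conf on the HF
[your:sourcetype]
SEDCMD-debug_canary = s/$/X/

If the X shows up in newly indexed events, the props stanza is matching and the transform machinery is being invoked.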
I have 5 forwarders forwarding data to my Splunk server, but when I log into this server only two of them are listed. When I do a TCP dump on the server I can see the forwarder is communicating and sending data, but when I log into the web UI the forwarder is not listed. Does anybody know what this might be? The configs on all forwarders are the same.
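A quick way to cross-check which forwarders are actually connecting is to ask the indexer's own metrics, sketched here (the field names come from the standard tcpin_connections events in splunkd's metrics.log):

index=_internal source=*metrics.log* group=tcpin_connections
| stats count latest(_time) as last_seen by hostname, sourceIp
| convert ctime(last_seen)

If a forwarder shows up here but not in the web UI view, the data flow itself is fine and the gap is in how that view is populated.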
Try something like this

| stats latest(result),
    latest(eval(if(result="failed",_time,null()))) as last_failed,
    latest(_time) as last_checked,
    latest(runtime) as lastRuntime,
    avg(seconds) as averageRuntime
    by test
It looks like you have your fields in the wrong order in the mvmap, and possibly your filter logic has been inverted. Try something like this

| makeresults count=10
| eval string=mvindex(split("abc",""),random()%3).mvindex(split("abc",""),random()%3).mvindex(split("abc",""),random()%3)
| eval regex="abc|cba"
| eval regex=split(regex,"|")
| eval true=mvmap(regex,if(match(string,regex),regex,0))
| eval true=mvfilter(NOT true=0)
| where isnotnull(true)

If the random string is abc or cba you will get a result; if not, you won't.
Hi all, newbie here - sorry if my subject is poorly worded, I'm a little confused! I'm trying to add a field to the table below that will show how long it's been since the last test failed. This table also contains a column that shows the last time a test ran (pass or fail). [screenshot of the table was attached here]

Here's my current search:

index="redacted"
| rex field=runtime "^(?<seconds>\w*.\w*)"
| stats latest(result), latest(_time) as last_checked, latest(runtime) as lastRuntime, avg(seconds) as averageRuntime by test
| eval averageRuntime=round(averageRuntime,0)
| strcat averageRuntime f2 " seconds." field3 averageRuntime
| `timesince(last_checked,last_checked)`

Any ideas or tips are greatly appreciated. Thanks in advance.
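Building on the stats answer above, one small sketch for turning the last failure time into a readable gap (it assumes last_failed holds the epoch time of the most recent failure, as produced by the latest(eval(...)) clause):

| eval time_since_failed = tostring(now() - last_failed, "duration")

tostring(<seconds>, "duration") formats a number of seconds as HH:MM:SS.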
Hi all, I need to create an alert that will be triggered when a latency threshold is breached for a sustained 30 minutes. I am doing my research on how to incorporate streamstats into my query, and so far I have come up with this:

index="x" source="y" EndtoEnd
| rex (?<e2e_p>\d+)ms \\ Extracts the numerical value into the e2e_p field.
| where isnotnull(e2e_p)
| streamstats avg(e2e_p) window=1800 current=t time_window=30m as avg_e2e_p
| where avg_e2e_p > 500

The condition doesn't happen often, but I'll work with the team that supports the app to simulate the condition once the query is finalized. I have never used streamstats before, but that's what has come up in my search for a means to incorporate a sliding window into an SPL query. Thank you in advance for taking the time to help with this.
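For comparison, a sketch of how that pipeline is usually spelled (assumptions: the rex pattern needs quotes, the as clause belongs directly after the aggregate, and time_window replaces the event-count window, so only one of the two is given):

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms"
| where isnotnull(e2e_p)
| streamstats time_window=30m avg(e2e_p) as avg_e2e_p
| where avg_e2e_p > 500

Note that streamstats with time_window requires events to be sorted by time; the default reverse-chronological order of search results satisfies this.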
Hi @Real_captain, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma points are appreciated
Hi @PickleRick, it is still not working.
@LowAnt - Splunk does not assign a unique ID to each event, so your best bet is the following:
- Specify index and sourcetype
- Specify earliest and latest (both the same time as your event time)
- Specify some keyword unique to that event, to avoid duplicate hits if other events have exactly the same time

Example:

index=<your_index> sourcetype=<your_sourcetype> earliest="10/5/2024:20:00:00" latest="10/5/2024:20:00:00" "this-keyword-specific-to-this-event"

Once you run this search, Splunk will generate a URL, and that URL is what you should be able to use.

I hope this helps!!! Kindly upvote if it does!!!
Before we know it, this post is going to be able to vote
Happy 13th Birthday, posting #107735! Why, with everything going on, I missed your birthday again -- but you've changed your name! I know you're now "admin-security message-id 3439", and I'll try to remember, but if you want to go back to being #107735, I support you there as well. You be you! Wow. 13 years. I know we said 12 would be your year to leverage technology so well established that it's gone from 'new' to 'common' to 'unmentioned' to 'forgotten' to 'new again', but it's never too late to get on board the 'trivially easy to configure and use' train. My, but it's easy. So much of this trail has been blazed before you by so, so many others - even in your own field - that it's almost a slam-dunk. Let's get you back on your feet, #10773--uh, 3439, and get you climbing up that hill back to the mainstream. Don't worry about the bright lights of the projects whizzing by on that mainstream. Your parents will be worried sick at where we found you, and they miss you and they just want to see you succeed. This is your year, 3439. Party with proper authentication and easy authorization like it's 1999!
Hello, my team has a search that uses a field called regex, containing a load of different regular expressions to match against a field called string, to identify key words we are looking for. Example:

| eval regex=split(regex, "|")
| mvexpand regex
| where match(string, regex)

The regex field contains 80+ different regexes to match certain key words. The mvexpand splits one event up into 80+ different events, just to potentially match on one field. Because of this mvexpand, we ran into mvexpand's memory limitations, causing events to be dropped.

I'm trying to see whether it is possible to match the regexes within the regex field against the string field without having to use mvexpand to break it apart. What I previously tried did not work; the recommended solutions were along the lines of:

| eval true = mvmap(regex, if(match(regex, query),regex,"0"))
| eval true = mvfilter(true="0")
| where ISNOTNULL(true)
Try looking for the matches below:
"rolling restart finished"
"My GUID is"
See my other comment. You will need another input method. I suggest you google Azure Functions "unzip" and see if you can just use Azure to do that. Otherwise you would need custom code or a scripted input to pull in the zip and pass it to something like the `unarchive_cmd`:

unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived source.
* Must be a shell command that takes input on stdin and produces output on stdout.
* Use _auto for Splunk software's automatic handling of archive files (tar, tar.gz, tgz, tbz, tbz2, zip)
* This setting applies at input time, when data is first read by Splunk software, such as on a forwarder that has configured inputs acquiring the data.
* Default: empty string

Azure Functions is likely the more scalable/flexible option, but if this is not a large amount of data, you might be able to hack together HF(s) to do this. Please accept my original comment as the solution to your post and review the options I mentioned in my comment. Also be sure to check out internal Azure SME channels to learn more, or holler at Pro Serv.
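For illustration, a minimal sketch of wiring this up in props.conf (the monitor path is hypothetical, and per the spec text above the command only runs when the file is treated as an archive; _auto already covers zip, so a custom stdin-to-stdout filter would only be needed for formats outside that list):

# props.conf - hypothetical source pattern
[source::/data/drop/*.zip]
unarchive_cmd = _auto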