All Posts

Ooh - that isn't necessary? Sorry, I'm new to Splunk. I was watching a tutorial on Udemy about Splunk and was following along with the guy doing the demo. After installing Splunk Enterprise, he started talking about the "universal forwarder" and how to install it. I thought it was part of the whole setup... So it wasn't required?
If this is a verbatim copy of your original event, you have much bigger problems with your data.
First and foremost - why are you installing a UF when you already have a full Splunk instance? Just add input(s) there.
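For example - a minimal sketch, assuming it's the Windows Security event log you're after (the stanza name follows the standard Windows event log input syntax; adjust the channel as needed) - something like this in inputs.conf on the full instance would do:

# inputs.conf on the Splunk Enterprise instance itself - no UF needed for local data
[WinEventLog://Security]
disabled = 0

Restart Splunk after the change and the events should land in the default index.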
Never mind... I stopped the universal forwarder software, waited a few seconds and restarted the forwarder. After this restart I performed a search (*) and it immediately gave me some results. I then created a user in PowerShell and let Splunk search for the username, resulting in some lines regarding the user. So... eventually it works as it should. With kind regards, Gerd
Try FORMAT = $1 with DEST_KEY = _MetaData:Index
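So the whole stanza would look something like this (a sketch reusing your regex unchanged - note there is no index:: prefix in FORMAT, and WRITE_META is dropped because DEST_KEY handles the routing):

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source
FORMAT = $1
DEST_KEY = _MetaData:Index

The captured team id then becomes the target index name, so an index with that exact name has to exist on the indexers.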
The environment I'm monitoring has a large number of custom database metrics. For those not familiar, these are queries run against the database by the AppDynamics agent, which are then displayed in custom dashboards. This works great for us. The problem is that our environment is complex and frequently changing. The custom metrics are currently maintained by hand (someone has to go in and modify them when the environment changes). There is no import/export option in the UI. I've read through the available API, but I'm not able to find a way to upload or download a custom database metric. Alternatively, is there a way to perform a variable substitution for the database server and value in the query? Anything that could make this less of a manual process. Thanks
I'm trying to let Splunk Enterprise log the creation of a user on the same system where Splunk is installed. My Splunk version is 9.3.1. Alongside this install, I've installed the latest Universal Forwarder (Windows) on localhost (127.0.0.1). When installing:
- I skip the SSL page and click "Next"
- select "Local System" and click "Next"
- check all items under "Windows Log Events" and click "Next"
- generate an admin account and password
- leave the "Deployment Server" settings empty
- enter "127.0.0.1:9997" as host and port for "Receiving Indexer"
- finish the installer

Then I create a user (net user /add <user>) in CMD. After this step I return to Splunk Search and enter * as the search criteria, but nothing is found. Even when I enter the username I added, the software finds nothing. Can someone tell me what I'm doing wrong or what the issue could be? Thanks! Gerd
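One way to check whether the forwarder is even connecting to the indexer (a sketch - it relies on the tcpin_connections entries that splunkd writes to metrics.log, and the exact field names can vary a bit by version):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by sourceHost

If nothing shows up there, the forwarder never established a connection, and the problem is on the forwarder side rather than in the search.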
@PickleRick Thanks for the suggestion. I made the following changes in transforms.conf:
1) For [new_sourcetype] - removed SOURCE_KEY = source
2) For [route_to_teamid_index] - updated the regex and set WRITE_META = true

After these changes, the sourcetype value successfully changed to "aws:kinesis:starflow", but the data did not route to the specified index. Instead, it went to the default index.

Current configs:
-----------------------------------------------------------------------------
props
-----------------------------------------------------------------------------
#custom-props-for-starflow-logs
[source::.../starflow-app-logs...]
TRANSFORMS-set_new_sourcetype = new_sourcetype
TRANSFORMS-set_route_to_teamid_index = route_to_teamid_index
-----------------------------------------------------------------------------
transforms
-----------------------------------------------------------------------------
#custom-transforms-for-starflow-logs
[new_sourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source
FORMAT = index::$1
DEST_KEY = _MetaData:Index
WRITE_META = true

I'm confident that both my props.conf and the [new_sourcetype] stanza in transforms.conf are functioning correctly. The only issue seems to be with [route_to_teamid_index].
I have the below events, and I need to display only the event which has action=test and category=testdata.

test {
line1: 1
"action": "test",
"category": "testdata",
}
test1 {
line1: 1
"action": "event",
"category": "testdata",
}
test2 {
line1: 1
"action": "test",
"category": "duplicate_data",
}
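A sketch that might do this, assuming the events are raw text exactly as shown (not valid JSON, so spath won't parse them) and that <your_index> is a placeholder to fill in:

index=<your_index>
| where match(_raw, "\"action\":\s*\"test\"") AND match(_raw, "\"category\":\s*\"testdata\"")

The closing quote after test keeps the regex from also matching longer values such as "testdata" in the action field.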
I have an event that looks like the below. The key-value pair "test":"22345" exists more than once, and I need only these events in the output, as a table or a count of events.

{"test":"22345","testType":"model"},{"test":"22345","testType":"model1"},{"test":"22345","testType":"model2"}
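A sketch under those assumptions (the pairs sit in _raw as shown; <your_index> is a placeholder): rex with max_match=0 collects every occurrence of the pair into a multivalue field, and mvcount keeps only events where it appears more than once.

index=<your_index>
| rex max_match=0 "(?<dup>\"test\":\"22345\")"
| where mvcount(dup) > 1
| stats count

Replace the final | stats count with | table _raw to list the matching events instead of just counting them.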
Wow, this is exactly what I wanted! I spent hours trying to figure this out. Thanks again for the clear instructions.
First things first.
1. Just for the sake of completeness of the info - are the logs ingested by inputs on this HF, not forwarded from remote?
2. To debug one thing at a time, I'd start with something foolproof like a simple SEDCMD transform adding a single letter to an event, and attach it to a source (see the sketch below). This way you're not left wondering whether the props part is wrong or the transform itself. Once you've made sure the props entry is OK because your transform is actually getting called, move on to debugging your index overwriting.
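Something like this in props.conf on the HF would do as the foolproof test (a sketch - reuse whatever source pattern your real props entry has; the marker string is arbitrary):

[source::.../starflow-app-logs...]
SEDCMD-debugmark = s/$/XXX/

If newly indexed events come out with XXX appended, the props stanza is matching, and you can move on to debugging the index transform itself.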
I have 5 forwarders forwarding data to my Splunk server, but when I log into this server only two of them are listed. When I do a TCP dump on the server I can see the forwarder is communicating and sending data, but when I log into the web UI the forwarder is not listed. Does anybody know what this might be? The configs on all forwarders are the same.
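A sketch that might help narrow this down - assuming the forwarders send their own _internal logs to the server (they do by default), every forwarder that is actually delivering data should show up as a host here:

index=_internal
| stats latest(_time) as last_seen by host
| convert ctime(last_seen)

If all 5 appear, the data is arriving and the issue is only with how the UI lists forwarders; if only two appear, the other three never got their events accepted despite the TCP connection.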
Try something like this

| stats latest(result),
    latest(eval(if(result="failed",_time,null()))) as last_failed,
    latest(_time) as last_checked,
    latest(runtime) as lastRuntime,
    avg(seconds) as averageRuntime
    by test
It looks like you have your fields in the wrong order in the mvmap, and possibly your filter logic has been inverted. Try something like this

| makeresults count=10
| eval string=mvindex(split("abc",""),random()%3).mvindex(split("abc",""),random()%3).mvindex(split("abc",""),random()%3)
| eval regex="abc|cba"
| eval regex=split(regex,"|")
| eval true=mvmap(regex,if(match(string,regex),regex,0))
| eval true=mvfilter(NOT true=0)
| where isnotnull(true)

If the random string is abc or cba you will get a result; if not, you won't.
Hi all, newbie here - sorry if my subject is poorly worded, I'm a little confused! I'm trying to add a field to the table below that will show how long it's been since the last time a test failed. This table also contains a column that shows the last time a test ran (pass or fail). Here's a picture. Here's my current search:

index="redacted"
| rex field=runtime "^(?<seconds>\w*.\w*)"
| stats latest(result), latest(_time) as last_checked, latest(runtime) as lastRuntime, avg(seconds) as averageRuntime by test
| eval averageRuntime=round(averageRuntime,0)
| strcat averageRuntime f2 " seconds." field3 averageRuntime
| `timesince(last_checked,last_checked)`

Any ideas or tips are greatly appreciated. Thanks in advance.
Hi all, I need to create an alert that will be triggered when a latency threshold is breached for a sustained 30 minutes. I am doing my research on how to incorporate streamstats into my query, and so far I have come up with this (the rex extracts the numerical value for the e2e_p field):

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms"
| where isnotnull(e2e_p)
| streamstats window=1800 current=t time_window=30m avg(e2e_p) as avg_e2e_p
| where avg_e2e_p > 500

The condition doesn't happen often, but I'll work with the team that supports the app to simulate the condition once the query is finalized. I have never used streamstats before, but that's what has come up in my search for a means to incorporate a sliding window into an SPL query. Thank you in advance for taking the time to help with this.
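One sketch for the "sustained 30 minutes" part, assuming e2e_p extracts cleanly and the events come back in the usual time order: take the minimum over the window instead of the average, because if even the lowest reading in the past 30 minutes is above the threshold, the breach was sustained the whole time.

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms"
| where isnotnull(e2e_p)
| streamstats time_window=30m min(e2e_p) as min_e2e_p
| where min_e2e_p > 500

An average can mask short dips below the threshold, so min is the stricter reading of "sustained".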
Hi @Real_captain, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Hi @PickleRick, it is still not working.