All Posts

Hi Will, Great idea, although I'm not having much success I'm afraid and the output field contains empty values. Lookup definition screenshot attached (the field names are correct) - can you spot any issues?

Hi @olahlala24

Without seeing the full search, I can't be sure that the search you showed will have given you metrics when you ran mcollect. Here is a working example which you can tweak:

index="_audit" search_id info total_run_time
| stats count(search_id) as jobs avg(total_run_time) as latency by user
| rename jobs as metric_name:jobs latency as metric_name:latency
| mcollect index=mcollect_test

To view data in your metric index you can do something like this:

| mstats avg(_value) WHERE index=my_metric_index by metric_name span=1m

or use mcatalog (not recommended other than for debugging etc.):

| mcatalog values(metric_name) WHERE index=my_metric_index

This will list all the available metrics in a given index.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will

Hi @ITSplunk117

This depends on whether your data has already been through a heavy forwarder (HF) or only a Universal Forwarder (UF). If your data arrives at an indexer from a UF then you can indeed use the approach you've suggested (see updated example below):

== props.conf ==
[yourSourcetype]
TRANSFORMS-updatesourcetype = updateMySourcetype

== transforms.conf ==
[updateMySourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::yourNewSourcetype

Regarding the FORMAT setting, the word "sourcetype" is case-sensitive (the value after :: is less so), and DEST_KEY needs to be exactly as described above to update the sourcetype.

If the data comes from a heavy forwarder then you might still be able to achieve this with INGEST_EVAL:

== props.conf ==
[yourSourcetype]
TRANSFORMS-updatesourcetype = updateMySourcetype

== transforms.conf ==
[updateMySourcetype]
INGEST_EVAL = sourcetype:=if(match(_raw,"SomeRegexHere"),"SourceTypeA","SourceTypeB")

== OR EVEN JUST ==
INGEST_EVAL = sourcetype:="NewSourcetypeName"

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will

Tell us what the requirements are and we may be able to tell you how to comply with them.
Hi @tomapatan

I've been having a play around with this and think I might have a solution for you... let's see.

I've created a lookup containing region, locality and postcode (see below). The region has a high-level postcode, but the postcode becomes more detailed when a locality is specified. But what if locality is blank in the search events? In this case it will return the postcode where the locality is "*".

It is important that the fields which are empty have an asterisk (*) in them, as we will configure them as a wildcard (see below).

When configuring the lookup definition, set max matches to 1 so it only returns the first matching result (this will be the most specific result, otherwise it will return the one with an "empty" (wildcard/asterisk) value). Set the Match type to WILDCARD(yourOptionalField1) - you can set multiple wildcard fields here. It is important that the default lookup response for your commonField is lower than the more specific options.

Then when we search it will look like this: note that when we specify a locality we get the more detailed postcode, and when we omit the locality we only get the high-level postcode.

Does this help at all? If not, it is worth checking this post to see if it helps: https://community.splunk.com/t5/Splunk-Search/Any-way-to-filter-multiple-wildcard-lookup-matches-to-narrowest/m-p/649565

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will

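In case it helps, here is a minimal transforms.conf sketch of the same lookup settings applied outside the UI. The stanza name postcode_lookup, the filename and the wildcarded field locality are assumptions for illustration, so substitute your own lookup and field names:

[postcode_lookup]
filename = postcode_lookup.csv
max_matches = 1
match_type = WILDCARD(locality)
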
Will, I'll give you the karma for this. Weird how the TA on Splunkbase kicked me out to here.  Could be user error on my part. Thanks for your help.
Hi @Vin

Please could you share some of your raw events so that we can help you further? In the meantime, you might have some success with something like this:

| rex field=_raw max_match=0 "(?<numbers>\d+)"

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will

Below is the search. I need to extract the IDs shown in the event below, and there are many other IDs as well. Please help me write a query to extract the IDs from the events that contain "Duplicate Id's that needs to be displayed ::::::[6523409, 6529865]" in the log file.

index="*" source="*" "Duplicate Id's that needs to be displayed ::::::[6523409, 6529865]"

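One possible sketch for that extraction (the field name duplicate_ids is made up here, and it assumes the IDs always appear as a bracketed, comma-separated list after that exact phrase):

index="*" source="*" "Duplicate Id's that needs to be displayed"
| rex field=_raw "Duplicate Id's that needs to be displayed ::::::\[(?<duplicate_ids>[^\]]+)\]"
| makemv delim=", " duplicate_ids
| table _time duplicate_ids
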
I'm working on a Splunk search that needs to perform a lookup against a CSV file. The challenge is that some of the fields in the lookup table contain empty values, meaning an exact match doesn't work. Here's a simplified version of my search:

index="main" eventType="departure"
| table _time commonField fieldA fieldB fieldC fieldD fieldE
| lookup reference_data.csv commonField fieldA fieldB fieldC fieldD fieldE OUTPUTNEW offset

The lookup file reference_data.csv contains the fields: commonField, fieldA, fieldB, fieldC, fieldD, fieldE, lookupValue. Sometimes fieldB, fieldC, or other fields in the lookup table are empty. fieldA always has a value, sometimes the same, but the value of the offset field changes based on the values of the other fields. If a lookup row has an empty value for fieldB, I still want it to match based on the available fields.

What I've tried:
- Using lookup normally, but it requires all fields to match exactly, which fails when lookup fields are empty.
- Creating multiple lookup commands for different field combinations, but this isn't scalable.

Desired outcome:
- If commonField matches, but fieldB is empty in the lookup file, I still want the lookup to return lookupValue.
- The lookup should prioritize rows with the most matching fields but still work even if some fields are missing.

Is there a way to perform a lookup in Splunk that allows matches even when some lookup fields are empty?

Is anyone familiar with any guidance on fulfilling the logging requirements for CTO 24-003 with Splunk queries and dashboards?
Found the issue: We built a standalone SH and copied the $SPLUNK_HOME/etc/apps directory from the SHC to it. We started removing apps on the test server, one at a time, and when we removed one of the apps and restarted, the searches started to work again. One of our crew found the following in the app that was just removed:

[source::stream:Gigamon]
EVAL-_time = strptime('timestamp', "%Y-%m-%dT%H:%M:%S,%N")

This seems to be the issue. We went back to the SHC and specified a source without removing anything and it pulled data. Not really clear on why that would make a difference, but it does. The main takeaway from this is that a configuration change that had an effect on _time caused this issue.

Hello, I'm trying to change the sourcetype at the indexer level based on the source. First question: is that possible on an indexer? Second: would it work with props.conf referencing the transforms?

transforms.conf
[testchange]
REGEX = .+
FORMAT = Sourcetype::testsourcetype
WRITE_META = true

Thanks

Hey all, I am new to Splunk Enterprise and I would like to understand more about metrics and the use of metric indexes. So far, I have created my own metric index by going to Settings > Indexing. I have created a bunch of Splunk rules, and so far I have used the mcollect command as follows:

host=(ip address) source=(source name)
| mcollect index=(my_metric_index)

I am able to get a list of event logs showing on the Splunk dashboard, but I am not sure if the results showing in Search and Reporting are being stored under my metric index. When I check under the Indexing tab, my metric index is still at "0 MB", indicating no data.

Is there any way someone can help? Is it my index that needs work? Is it my search string query?

Unfortunately, the logs do not have the strings "gets created in system", "gets modified...", or anything similar. The only information we see in the logs is:

_time tradeNumber received
_time tradeNumber sent
_time tradeNumber received
_time tradeNumber sent
_time tradeNumber received
_time tradeNumber sent

Hello, and I have another weird issue:

When I execute a search on a SHC in the Search and Reporting app, getting data from 2025-02-27:

index=test earliest=-7d@d latest=-6d@d

I get zero events.

When I execute the search WITHOUT the earliest and latest time modifiers and use the Time Picker in the UI, which results in "during Thu, Feb 27, 2025", I get around 167,153 results.

Specifying the time range with earliest and latest time modifiers is NOT giving me the "Your timerange was substituted based on your search string" message. If I use tstats, I get the correct number of events, the correct date, and the message "Your timerange was substituted based on your search string" is present:

| tstats count where index=test earliest=-7d@d latest=-6d@d by _time span=d

I also made index=test earliest=-7d@d latest=-6d@d a saved search which executes every 10 minutes - zero events.

Another bit of weirdness: if I run that search and specify "All time", it will pull events ONLY for 2025-02-27. Nothing for other dates, and the index has 12 months of events, populated for every day. So it looks at both the time qualifiers and the time picker under that scenario.

Any ideas what might be causing this? (I have several standalone search heads that are working fine.)

Hi @DPOIRE,

you have to extract the correct delays and then use them as you like:

<your_search>
| stats
    earliest(eval(if(searchmatch("gets created in system"),_time,""))) AS gets_created_in_system
    latest(eval(if(searchmatch("gets sent to market"),_time,""))) AS gets_sent_to_market
    earliest(eval(if(searchmatch("gets modified in system"),_time,""))) AS gets_modified_in_system
    latest(eval(if(searchmatch("gets sent to market with modification"),_time,""))) AS gets_sent_to_market_with_modification
    earliest(eval(if(searchmatch("gets cancelled in system"),_time,""))) AS gets_cancelled_in_system
    latest(eval(if(searchmatch("gets sent to market as cancelled"),_time,""))) AS gets_sent_to_market_as_cancelled
    BY TradeNumber

In this way you'll have the epoch time of each event in the same row and you can calculate all the diffs you need.

Ciao.
Giuseppe

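For example, once those timestamps are on the same row, the delays could be calculated with something like this (a sketch, assuming the field names produced by the stats above):

| eval creation_to_market_delay = gets_sent_to_market - gets_created_in_system
| eval modification_to_market_delay = gets_sent_to_market_with_modification - gets_modified_in_system
| eval cancellation_to_market_delay = gets_sent_to_market_as_cancelled - gets_cancelled_in_system
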
Hi @Priya70,

in this case the issue is in the verification algorithm, not in the search! Apply the appropriate adaptation for your data.

Ciao.
Giuseppe

Hi @Jailson,

the timepicker works only on _time and not on a field like deletion_date. If you want to filter your data on this field, you have to add the filter to the main search.

In addition, after the top command you have only the fields in the command, in your case: categoryId, percent, count. If you want to filter your data by deletion_date, you have to put this filter in the main search or before the top command, obviously only if you have this field in your data.

The syntax depends on the format of your deletion_date field, e.g. if it's in the format "yyyy-mm-dd" and you want results where deletion_date is later than 2024-12-31, you could use something like this:

sourcetype=access_* status=200 action=purchase
| eval deletion_date_epoch=strptime(deletion_date,"%Y-%m-%d"), deletion_date_filter_epoch=strptime("2024-12-31","%Y-%m-%d")
| where deletion_date_epoch>deletion_date_filter_epoch
| top categoryId

Ciao.
Giuseppe

@DPOIRE

This simulates trade events using makeresults, assigns timestamps, and labels each step (New Order, Modification, Cancellation). It uses streamstats to track the event sequence, capture previous timestamps, and calculate the time delay for each step.

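A minimal sketch of that approach (the trade number, event labels and timestamps below are made-up sample values - adjust them to match your real data):

| makeresults count=3
| streamstats count as step
| eval tradeNumber="T1001"
| eval event=case(step=1, "New Order", step=2, "Modification", step=3, "Cancellation")
| eval _time=case(step=1, relative_time(now(),"-15m"), step=2, relative_time(now(),"-10m"), step=3, relative_time(now(),"-2m"))
| sort 0 tradeNumber _time
| streamstats current=f last(_time) as prev_time by tradeNumber
| eval delay_seconds=_time-prev_time
| table _time tradeNumber event delay_seconds
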
@Jailson  What exactly are you looking for? Could you elaborate a bit more?