@gcusello thank you, yes, I am keeping it on the indexer. A quick query regarding that: it's a Splunk Cloud (Classic) environment and I am keeping props and transforms on the Splunk Cloud indexers. If we drop these events from the Splunk Cloud indexers using props and transforms, would they still count against SVCs? I am asking because null-queue routing happens after parsing, so processing is still taking place. On-prem, as far as I know, it won't count against licensing because indexing won't happen. How does it work in Splunk Cloud?
Below is the complete XML. Here I am not getting how to add the token values to the other panels in the dashboard. Can you help me with that? (Note: the original had unclosed <sampleRatio>1<sampleRatio> tags, corrected below.)

<dashboard>
  <label>Dashboard title</label>
  <row>
    <panel>
      <table depends="$hide$">
        <title>$Time_Period_Start$ $Time_Period_End$</title>
        <search>
          <query>| makeresults | addinfo | eval SearchStart = strftime(info_min_time, "%Y-%m-%d %H:%M:%S"), SearchEnd = strftime(info_max_time, "%Y-%m-%d %H:%M:%S") | table SearchStart, SearchEnd</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <done>
            <set token="Time_Period_Start">$result.SearchStart$</set>
            <set token="Time_Period_End">$result.SearchEnd$</set>
          </done>
        </search>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>first panel</title>
      <single>
        <search>
          <query>| tstats count as internal_logs where index=_internal</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>second panel</title>
      <single>
        <search>
          <query>| tstats count as audit_logs where index=_audit</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Third panel</title>
      <single>
        <search>
          <query>| tstats count as main_logs where index=main</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>
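A sketch of one way to surface those tokens in the later panels (untested; token names taken from the XML above): once the <done> handler sets them, they can be referenced with $...$ syntax in any other panel's <title> or <query>, for example:

```xml
<row>
  <panel>
    <!-- hypothetical example: show the computed time window in the panel title -->
    <title>first panel ($Time_Period_Start$ to $Time_Period_End$)</title>
    <single>
      <search>
        <query>| tstats count as internal_logs where index=_internal</query>
        <earliest>-7d@d</earliest>
        <latest>@d</latest>
      </search>
      <option name="drilldown">none</option>
    </single>
  </panel>
</row>
```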
Use the mvfind function. | eval present=if(isnotnull(mvfind(DNS_Matched, DNS)),"yes", "no")  
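A quick way to sanity-check this with makeresults (a sketch; note that mvfind treats its second argument as a regular expression, so a value like host1 will also match host1-a unless anchored):

```
| makeresults
| eval DNS="host1", DNS_Matched=split("host1,host1-a,host1-r", ",")
| eval present=if(isnotnull(mvfind(DNS_Matched, DNS)), "yes", "no")
| table DNS, DNS_Matched, present
```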
I have two fields: DNS and DNS_Matched. The latter is a multi-value field. How can I see if the field value in DNS is one of the values of the multi-value field DNS_Matched? Example:

DNS: host1    DNS_Matched: host1, host1-a, host1-r
DNS: host2    DNS_Matched: host2, host2-a, host2-r
Hi @Sid,
first of all, if you use the sourcetype in the stanza header, you don't need the sourcetype:: prefix:

[risktrac_log]
TRANSFORMS-null = setnull

Then use an easier regex:

[setnull]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Finally, where did you put these conf files? They must be on the first full Splunk instance the logs pass through, in other words on the first Heavy Forwarders or, if not present, on the Indexers, not on the Universal Forwarders.
Ciao.
Giuseppe
I am trying to set up props and transforms to send DEBUG events to the null queue. I tried the regex below, but it doesn't seem to work.

transforms.conf:

[setnull]
REGEX = .+(DEBUG...).+$
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[sourcetype::risktrac_log]
TRANSFORMS-null = setnull

I used REGEX = \[\d{2}\/\d{2}\/\d{2}\s\d{2}:\d{2}:\d{2}:\d{3}\sEDT]\s+DEBUG\s.* as well, but that doesn't drop DEBUG messages either. I also tried just DEBUG as the regex, no help. Can someone help me here please? Sample event:

[10/13/23 03:46:48:551 EDT] DEBUG DocumentCleanup.run 117 : /_documents document cleanup complete.

How does REGEX pick the pattern? I can see that both regexes match the whole event. We can't turn DEBUG off for the application.
That worked. Thank you very much!! 
As a follow-up, I tried the following timestamp settings instead. This regex matches the JSON up to the record.time.timestamp field, and in Settings -> Add Data it also correctly sets the _time field for all my test data:

TIME_PREFIX = \"time\":\s*{.*\"timestamp\":\s
TIME_FORMAT = %s.%6N

This also fails to properly parse the data when it is ingested through the Universal Forwarder.
We are using Splunk Cloud 9.0.2303.201 and have version 9.0.4 of the Splunk Universal Forwarder installed on a RHEL 7.9 server. The UF is configured to monitor a log file that outputs JSON in this format:   {"text": "Ending run - duration 0:00:00.249782\n", "record": {"elapsed": {"repr": "0:00:00.264696", "seconds": 0.264696}, "exception": null, "extra": {"run_id": "b20xlqbi", "action": "status"}, "file": {"name": "alb-handler.py", "path": "scripts/alb-handler.py"}, "function": "exit_handler", "level": {"icon": "", "name": "INFO", "no": 20}, "line": 79, "message": "Ending run - duration 0:00:00.249782", "module": "alb-handler", "name": "__main__", "process": {"id": 28342, "name": "MainProcess"}, "thread": {"id": 140068303431488, "name": "MainThread"}, "time": {"repr": "2023-10-13 10:09:54.452713-04:00", "timestamp": 1697206194.452713}}}   Long story short, it seems that Splunk is getting confused by the multiple fields in the JSON that look like timestamps. The timestamp that should be used is the very last field in the JSON. I first set up a custom sourcetype that's a clone of the _json sourcetype by manually inputting some of these records via Settings -> Add Data.  Using that tool I was able to get Splunk to recognize the correct timestamp via the following settings:   TIMESTAMP_FIELDS = record.time.timestamp TIME_FORMAT = %s.%6N     When I load the above record by hand via Settings -> Add Data and use my custom sourcetype with the above fields then Splunk shows the _time field is being set properly,  so in this case it's 10/13/23 10:09:54.452 AM. The exact same record, when loaded through the Universal Forwarder, appears to be ignoring the TIMESTAMP_FIELDS parameter. It ends up with a date/time of 10/13/23 12:00:00.249 AM, which indicates that it's trying to extract the date/time from the "text" field at the very beginning of the JSON (the string "duration 0:00:00.249782"). 
The inputs.conf on the Universal Forwarder is quite simple:   [monitor:///app/appman/logs/combined_log.json] sourcetype = python-loguru index = test disabled = 0     Why is the date/time parsing working properly when I manually load these logs via the UI but not when being imported via the Universal Forwarder?
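One possible explanation (an assumption to verify, not a confirmed diagnosis): TIMESTAMP_FIELDS only takes effect together with INDEXED_EXTRACTIONS, and structured-data parsing happens on the Universal Forwarder itself rather than on the indexers. If the custom sourcetype's props only exist in Splunk Cloud, the UF never applies them. A sketch of the props.conf that would need to be deployed to the UF:

```
# props.conf on the Universal Forwarder (sketch; sourcetype name taken from inputs.conf)
[python-loguru]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N
```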
I am attempting to set up an INGEST_EVAL for the _time field. My goal is to check whether _time is in the future and prevent any future timestamps from being indexed. The INGEST_EVAL is configured correctly in props.conf, fields.conf and transforms.conf, but it fails when I attempt to use a conditional statement. My goal is to do something like this in my transforms.conf:

[ingest_time_timestamp]
INGEST_EVAL = ingest_time_stamp:=if(_time > time(), time(), _time)

If _time is in the future, I want it set to the current time; otherwise I want to leave it alone. Anyone have any ideas?
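One variant worth trying (an untested sketch): assign the result back to _time itself instead of a new indexed field, since _time is writable in INGEST_EVAL:

```
# transforms.conf (sketch): clamp timestamps that are in the future to ingest time
[ingest_time_timestamp]
INGEST_EVAL = _time:=if(_time > time(), time(), _time)
```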
What you need is everything between the quotation marks.  Try this | rex "Sample ID\\\":\\\"(?<SampleID>[^\"]+)"
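To test the rex without real data, a makeresults sketch (the sample GUID is taken from the question below; the quoting levels may need adjustment in your environment):

```
| makeresults
| eval _raw="{\"Sample ID\":\"020ab888-a7ce-4e25-z8h8-a658bf21ech9\"}"
| rex "Sample ID\\\":\\\"(?<SampleID>[^\"]+)"
| table SampleID
```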
Hi @jbanAtSplunk,
this means that you require more Indexers: at least 5.
About storage: if the RF and SF are both 2, you have 5 Indexers, and Contingency = 10%, you'll have:

Total_Storage = (License*Retention*0.5*SF)*(1 + Contingency) + License*3.4 = (500*30*0.5*2)*1.1 + 1700 = 18200 GB

Storage per Indexer = 18200/5 = 3640 GB per Indexer

(License*3.4 is the datamodels' storage for ES.)
Ciao.
Giuseppe
I want to extract Sample ID field value "Sample ID":"020ab888-a7ce-4e25-z8h8-a658bf21ech9"
Yes the latest version definitely fixes this and AFAIK is a good, stable version too with lots of other bug fixes.
That was one of the steps in the decommissioning process we were using. Removing the host from the cluster peers didn't remove it from whatever list the Health Reporter component uses on the search heads. The peers were definitely removed: looking at Settings -> Distributed Search -> Search Peers clearly shows them as not present. Yet the Health Reporter alerts still complain about a lack of connectivity to the decommissioned search peer. It appears the only way to reload whatever list the Health Reporter keeps internally is to restart the Splunk service on the search head, or to disable the Health Reporter component for search peer connectivity entirely; there are no half measures or custom lists in the health.conf file.
Currently:

[default]
repFactor = auto

The search factor is the default, so it's 2.

ESS is Splunk Enterprise Security (on its own SH); no other Premium Apps.
Hi @jbanAtSplunk,
storage on an Indexer Cluster depends on the Replication and Search Factors; what are they? What's ESS? Do you have Premium Apps?
Ciao.
Giuseppe
Will check the reference. We already have 1 x SH, 1 x ESS, 2 x Indexers in a cluster, and 1 x Deployment Server. But the license was 4 times smaller; now, as we expand the license, I am looking at what we need to expand (storage, CPU, RAM) and by how much. Probably we will go to 4 Indexers (from 2) and expand from 2.5 TB per Indexer to 7.5 TB per Indexer.
Actually, I have 2 separate events. The start event has a unique ID and a few other fields, for example "Job initiated". If the event contains "Job initiated", that means it is the first event, and if the event contains "Job Completed", that means it is the last event. So, I want to calculate the total time taken for that particular Job ID to complete.
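A common pattern for this kind of duration calculation (a sketch; the index name and the JobID field name are assumptions to adapt to your data):

```
index=your_index ("Job initiated" OR "Job Completed")
| stats earliest(_time) as start_time latest(_time) as end_time by JobID
| eval duration_sec = end_time - start_time
| table JobID, start_time, end_time, duration_sec
```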
Hi @jbanAtSplunk,
this isn't a question for the Community but for a Splunk Architect. Anyway, there are many other parameters needed to answer your question:

is there an Indexer Cluster, and if yes, what are the Search Factor and the Replication Factor?
is there a Search Head Cluster?
are there Premium Apps such as Enterprise Security or ITSI?
how many concurrent users do you foresee on the system?
are there scheduled searches?

Anyway, if you don't have ES or ITSI, you could use around 3 Indexers. If you don't have a Search Head Cluster, you can use one Search Head; if you have a Search Head Cluster, you need at least three SHs and a Deployer. If you have an Indexer Cluster, you need at least 3 Indexers and one Cluster Manager. If you have ES or ITSI, the resources are completely different!

For storage: if you don't have an Indexer Cluster, you could consider:

Storage = License*Retention*0.5 = 500*30*0.5 = 7500 GB

If you have an Indexer Cluster, the required storage depends on the above factors.

About CPUs and RAM: they depend on the presence of Premium Apps, the number of concurrent users, and the number of scheduled searches, so I cannot help you without this information. The only hint is to look at the reference hardware at this URL: https://docs.splunk.com/Documentation/Splunk/9.1.1/Capacity/Referencehardware
Ciao.
Giuseppe