
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

There seems to be a TZ issue, possibly combined with other issues in your ingestion phase. If I recall correctly, time zone offsets are whole-hour or half-hour differences between local time and UTC, but your time difference didn't match that. You should share your actual props.conf and also a raw source event as it looked before it was ingested into Splunk. With those we could help you.
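For reference, a minimal props.conf sketch of the kind of settings involved; the sourcetype name and time zone are placeholders, not your actual config:
# hypothetical sourcetype name and TZ -- replace with your own values
[your_sourcetype]
TZ = Europe/Helsinki
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
Sharing your real version of this stanza, together with the raw event, is what would let us pin down the offset.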
We have a huge JSON array event. When I search for that event, the search results show a few missing values for a field. Any suggestions on how to fix this issue and have all values displayed for the field?
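If the values are being dropped at search time rather than at index time, an explicit spath extraction is one way to check. A minimal sketch, assuming a hypothetical array path items{}.status (replace the index, sourcetype, and path with your own):
index=myindex sourcetype=my_json | spath path="items{}.status" output=status | mvexpand status | table status
If the event is very large, it may also be worth checking the spath limits in limits.conf, since truncation there can hide values from automatic extraction.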
Steps taken:
1. Installed Splunk Enterprise on all new servers.
2. Enabled clustering on the designated manager node.
3. Configured clustering on the new indexer, adding it as a peer node.
4. Enabled clustering and added the new server as a search head.
After verifying that the newly added servers appeared on the manager node, I attempted to enable clustering on the existing standalone Splunk server and add it as a peer node. However, when I tried to restart the Splunk services, they wouldn't start. I had to remove the clustering stanza for the services to start successfully. I'm unsure where I went wrong or if I missed a step, but it seems that adding the standalone server to the newly created cluster prevents it from starting unless I remove the clustering stanza.
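For context, a minimal sketch of the server.conf clustering stanza a peer node typically needs; the manager URI, replication port, and pass4SymmKey below are placeholders, not your actual settings:
[clustering]
mode = peer
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = your_cluster_secret

[replication_port://9887]
When startup fails right after adding this stanza, splunkd.log usually contains the specific error (a mismatched pass4SymmKey or an unreachable manager are common causes), so posting that error would help narrow it down.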
Are you able to find working values for the inputs of the app? It seems like you can enter your Elasticsearch domain name, port, user, secret, interval, etc., and then it should theoretically pull data from your Elasticsearch instance. If you enter the values but it does not work, you could try searching your _internal index for keywords like "elasticsearch" to see if the app generates any errors that would explain why it is not pulling data from your Elasticsearch instance.
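A minimal sketch of that kind of _internal search; since the app's exact log source isn't confirmed, it just filters on a keyword and groups by where the messages come from:
index=_internal "elasticsearch" (ERROR OR WARN) | stats count by sourcetype, source
From there you can drill into the specific source file that is logging the errors.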
I indexed this log in a new sourcetype on a test machine in the GMT+2 timezone, and the timestamp seems to have extracted properly. We would need to know what your timestamp settings in props.conf are to find out where the timestamp extraction is going wrong.  
Hi @isoutamo, below is the raw event. I don't have access to props.conf, so I just wanted to extract the timestamp from the raw event.
2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
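Since props.conf isn't an option, a search-time extraction is one way to go. A minimal sketch, assuming the timestamp always appears at the start of _raw in that format (the index, sourcetype, and field names are just illustrative):
index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<event_log_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
| eval event_log_epoch=strptime(event_log_time, "%Y-%m-%d %H:%M:%S,%3N")
| table _time event_log_time event_log_epoch
If you only need to display the value, the rex alone is enough; strptime just gives you an epoch number you can compare against _time.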
@gcusello has a good solution, but mind the typos (the space in the fields command and the spelling of "append"): ... | fields - count | append [ | inputlookup compliance.csv | fields Solution Status ] ...
For completeness, here's how I spliced them together, although I tried just adding your commands after my search, entirely, and after my search but without the addcoltotals, and neither worked.
| loadjob savedsearch="30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
When I add your processing to the end of mine I get a table that only has one column -- index.  None of the data is there.
In such cases with malfunctioning UI elements, I would recommend testing it with a different internet browser. Which browser are you using?
What do you have in the raw event, and how have you defined the timestamp extraction in props.conf?
I recommend first running a search using only inputlookup to ensure that your IP addresses are returning properly:
| inputlookup known_addresses.csv
You should get a single column of addresses with the "ip" field name:
ip
192.168.1.1
123.123.123.123
222.111.133.111
Then you can put it into a negated search filter in your main search. (I haven't checked your regex, so I assume it works to create an "ip" field with an IP address value.)
index=myindex | rex field=_raw "(?<ip>\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)" | search NOT [| inputlookup known_addresses.csv] | sort ip | table ip
If the regex does not work, you can try this one:
index=myindex | rex field=_raw "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | search NOT [| inputlookup known_addresses.csv] | sort ip | table ip
You may also want to put dedup at the end, to remove duplicate IP addresses:
... | dedup ip
Currently working on a data retention / log collection policy to meet M-21-31, and I'm not sure if the config below would meet the requirement.
Current requirement:
Hot: 6 months
Warm: 24 months
Cold: 18 months
Archive or frozen: 18 months, with a data ceiling and data deletion
Would adding these settings to the index stanza meet the above requirements? If not, please let me know what the settings and/or config should look like.
indexes.conf (add the below settings to the index stanza):
maxHotSpanSecs = 15778476 (would provide around 6 months of hot bucket data)
maxHotIdleSecs = 15778476
(not sure about the warm bucket setting to get 24 months of warm bucket data)
coldPath.maxDataSizeMB = 47335428 (would provide around 18 months of cold bucket data)
frozenTimePeriodInSecs = 47335428 (would provide around 18 months of archived/frozen data)
coldToFrozenDir = $SPLUNK_HOME/myfrozenarchive (send archived/frozen data to this location so it is not deleted)
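As a reference point, a minimal sketch of how those settings could be assembled into one indexes.conf stanza; the index name and paths are placeholders, and maxWarmDBCount limits warm buckets by count rather than by age, so it only approximates the 24-month warm window:
# hypothetical index name and paths -- replace with your own
[my_m21_31_index]
homePath = $SPLUNK_DB/my_m21_31_index/db
coldPath = $SPLUNK_DB/my_m21_31_index/colddb
thawedPath = $SPLUNK_DB/my_m21_31_index/thaweddb
maxHotSpanSecs = 15778476
maxHotIdleSecs = 15778476
maxWarmDBCount = 300
coldPath.maxDataSizeMB = 47335428
frozenTimePeriodInSecs = 47335428
coldToFrozenDir = $SPLUNK_HOME/myfrozenarchive
Note that frozenTimePeriodInSecs is measured against total event age across hot, warm, and cold, so with the value above data would be frozen at roughly 18 months old; to keep data searchable for the full 48 months (6 + 24 + 18) before freezing, it would need to be closer to 126227808 seconds.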
Hello, I have timestamps that are not matching. How do I table the actual "Event log time stamp"?
Splunk time stamp: 8/14/24 4:29:21.000 AM
Event log time stamp: 2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
Currently you must create an idea for this at ideas.splunk.com, if there isn't one already.
At least some changes can be found in the _configtracker index.
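A minimal sketch of that kind of search; since the exact sourcetype and field names in _configtracker events depend on your version, this just filters on a keyword and groups by where the events come from:
index=_configtracker "props.conf" | stats count by sourcetype, source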
I don't necessarily need the eval, I just need it to output to the extra field in the table. Output from running the custom command looks like the following:
| nslookupsearch testcmd
Output example: 10.10.10.10
I have a csv with IP addresses. I would like to conduct a search for addresses that are NOT listed in that csv. I was attempting the following, but it does not render the results I was expecting. I want to search for IP addresses that are not in that list, i.e. unknown addresses. (Splunk Enterprise Security)
index=myindex | rex "(?<ip>\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)" | sort ip | table ip NOT [inputlookup known_addresses.csv]
You didn’t tell why you are needing eval. Can you show real output of your custom command?
I suspect you want to know about priority alerts, but how will Splunk magically know about this? It's always better to give good context to the Splunk community, so what is P1C? And JMET sounds like an internal company code for your Splunk environment (which you should anonymise).
Unless you have, say, P1C in the saved search title, for example my_search_P1C, Splunk will not be able to find it or filter on it. Otherwise you will need to use the eval command and, for each saved search that you know is a P1C, assign an eval field called priority, which will require a lot of work.
Tip: as ever, it's best practice to have good business naming conventions; it makes things easier in the long run.
Example using makeresults to assign P1C:
| makeresults count=2
| streamstats count as search_num
| eval title=case(search_num=1, "my_savedsearch1", search_num=2, "my_savedsearch2")
| eval priority=if(title=="my_savedsearch1", "P1C", null())
| fields - search_num
I suspect you want to know about priority alerts, but how will Splunk magically know about this?  Its always better to give good context to the Splunk communiy, so what is P1C? and JMET sounds like some internal Splunk environment company code (which you should anonymise)  Unless you have say for instance in the saved search title name P1C, example, my_search_P1C, Splunk will not be able to find it or filter on it. Or you will need to use the eval command and for each saveded that you know is a P1C and assign a eval field called priority, but will require a lot of work.  Tip: As ever its always best practise to have good business naming conventions, makes things easier in the long run Example using makeresults to assign PC1 | makeresults count=2 | streamstats count as search_num | eval title=case(search_num=1, "my_savedsearch1", search_num=2, "my_savedsearch2") | eval priority=if(title=="my_savedsearch1", "P1C", null()) | fields - search_num