All Posts

Do all those servers have exactly the same Splunk version? You said that you also added a SH to this cluster; what do you actually mean by this? What do you find in the old indexer's splunkd.log after you try to add it as a cluster peer? And how are you adding it into the cluster (CLI, editing config files)?
Is there any software I need to use in addition to Splunk to achieve this? If so, do you have any suggestions?
Hello, I am trying to display my Splunk dashboard on a TV 24/7 at the front of my shop to show a running count of customers who support our store and analysis of their feedback. The issue I am having: my dashboard is NOT updating correctly. It is set to refresh every 15 minutes, but when it does this, it takes the dashboard out of full screen, which I do not want (it shows my tabs and apps rather than just the dashboard). Question: how can I ensure that when the Splunk webpage refreshes in the browser, the dashboard is refreshed/reset in full screen? Thank you.
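A minimal sketch of one way to keep the dashboard chrome hidden, assuming a Simple XML dashboard; the host, app, and dashboard names are placeholders, and the hide* URL parameters are an assumption worth verifying against the dashboard documentation for your Splunk version:

https://your-splunk-host:8000/en-US/app/your_app/your_dashboard?hideSplunkBar=true&hideAppBar=true&hideFooter=true&hideEdit=true

Loading the dashboard through a URL like this (for example in a browser kiosk mode) means a refresh reloads the same chrome-less view instead of dropping back to the normal app layout.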
There seems to be a TZ (time zone) issue, probably along with some other issues in your ingestion phase. If I recall right, TZ offsets are +/- whole hours or x.5-hour differences between local time and UTC, but your time difference didn't match that. You must get your current props.conf and also a raw source event as it looked before it was ingested into Splunk. With those we could help you.
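For reference, a minimal props.conf sketch for an event that starts with a "2024-08-13 17:49:23,006"-style timestamp; the sourcetype name and TZ value are placeholders and must match where the events are actually generated:

# props.conf on the first full Splunk instance that parses this data
# (sourcetype name and TZ are placeholders -- adjust to your environment)
[my_app_log]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = Europe/Helsinki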
We have a huge JSON array event. When I search for that event, the search results show a few missing values for a field. Any suggestions on how to fix this issue so that all values are displayed for the field?
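One possible cause (an assumption, not confirmed from the post): very large JSON events can hit truncation or automatic-extraction limits, so some array elements never get extracted. A sketch of the settings worth checking, with placeholder names and sizes:

# props.conf -- raise the event size limit if the raw JSON is being cut off
[my_json_sourcetype]
TRUNCATE = 500000

# limits.conf -- raise the automatic JSON/spath extraction limit (bytes)
[spath]
extraction_cutoff = 500000

Running | spath explicitly in the search can also confirm whether the values exist in the raw event but are simply not auto-extracted.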
Steps taken:
1. Installed Splunk Enterprise on all new servers.
2. Enabled clustering on the designated manager node.
3. Configured clustering on the new indexer, adding it as a peer node.
4. Enabled clustering and added the new server as a search head.

After verifying that the newly added servers appeared on the manager node, I attempted to enable clustering on the existing standalone Splunk server and add it as a peer node. However, when I tried to restart the Splunk services, they wouldn't start. I had to remove the clustering stanza for the services to start successfully. I'm unsure where I went wrong or if I missed a step, but it seems that adding the standalone server to the newly created cluster prevents it from starting unless I remove the clustering stanza.
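For comparison, a sketch of how a peer is typically added from the CLI; the manager URI, replication port, and secret are placeholders, and on older versions the options are -mode slave and -master_uri instead of -mode peer and -manager_uri:

splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret yourclustersecret
splunk restart

If splunkd refuses to start after the clustering stanza is added, the reason is usually spelled out in $SPLUNK_HOME/var/log/splunk/splunkd.log, which is worth checking before removing the stanza.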
Were you able to find working values for the inputs of the app? It seems like you can enter your Elasticsearch domain name, port, user, secret, interval, etc., and then it should theoretically pull data from your Elasticsearch instance. If you enter the values but it does not work, you could try searching your _internal index for keywords like "elasticsearch" to see whether the app generates any errors that would explain why it is not pulling data from your Elasticsearch instance.
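A rough sketch of that kind of _internal search; the source filter is an assumption about how the app names its log files:

index=_internal (source=*elasticsearch* OR "elasticsearch") (ERROR OR WARN)
| table _time source _raw

Any authentication, connection, or certificate errors from the input should show up here around the configured polling interval.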
I indexed this log in a new sourcetype on a test machine in the GMT+2 timezone, and the timestamp seems to have extracted properly. We would need to know what your timestamp settings in props.conf are to find out where the timestamp extraction is going wrong.  
Hi @isoutamo, below is the raw event. I don't have access to props.conf, so I just want to extract the timestamp from the raw event:

2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
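If props.conf really can't be touched, a search-time sketch can at least pull that timestamp out of the raw event; the index, sourcetype, and field names here are placeholders, and this does not fix _time itself:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<event_ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),(?<event_ms>\d{3})"
| eval event_epoch=strptime(event_ts, "%Y-%m-%d %H:%M:%S") + tonumber(event_ms)/1000
| eval event_log_time=strftime(event_epoch, "%Y-%m-%d %H:%M:%S.%3N")
| table _time event_log_time _raw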
@gcusello has a good solution, but mind the typos (a space in the fields command and in "append"):

... | fields - count
| append [ | inputlookup compliance.csv | fields Solution Status ]
...
For completeness, here's how I spliced them together, although I also tried just adding your commands after my entire search, and after my search but without the addcoltotals, and neither worked.

| loadjob savedsearch="30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
When I add your processing to the end of mine I get a table that only has one column -- index.  None of the data is there.
In such cases with malfunctioning UI elements, I would recommend testing it with a different internet browser. Which browser are you using?
What do you have in the raw event, and how have you defined the timestamp extraction in props.conf?
I recommend first running a search using only inputlookup to ensure that your IP addresses are returning properly:

| inputlookup known_addresses.csv

You should get a single column of addresses with the "ip" field name:

ip
192.168.1.1
123.123.123.123
222.111.133.111

Then you can put it into a negated search filter in your main search (I haven't checked your regex, so I assume it works to create an "ip" field with an IP address value):

index=myindex
| rex field=_raw "(?<ip>\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)"
| search NOT [| inputlookup known_addresses.csv]
| sort ip
| table ip

If that regex does not work, you can try this one:

index=myindex
| rex field=_raw "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| search NOT [| inputlookup known_addresses.csv]
| sort ip
| table ip

You may also want to put dedup at the end, to remove duplicate IP addresses:

... | dedup ip
Currently working on a data retention / log collection policy to meet M-21-31 and not sure if the below config would meet the requirement.

Current requirement:
Hot: 6 months
Warm: 24 months
Cold: 18 months
Archive or frozen: 18 months, with data ceiling and data deletion

Would adding these settings to the index stanza meet the above requirements? If not, please let me know what the settings and/or config would look like.

indexes.conf (add the below settings to the index stanza):

maxHotSpanSecs = 15778476           # would provide around 6 months of hot bucket data
maxHotIdleSecs = 15778476
# NOT sure about the warm bucket setting to get 24 months of warm bucket data
coldPath.maxDataSizeMB = 47335428   # would provide around 18 months of cold bucket data
frozenTimePeriodInSecs = 47335428   # would provide around 18 months of archived/frozen data
coldToFrozenDir = $SPLUNK_HOME/myfrozenarchive   # send archived/frozen data to this location so the data is not deleted
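As a point of comparison, a sketch of a time-based approach, assuming the 6/24/18-month figures are meant to be cumulative (48 months searchable before archiving); the index name, paths, and the size ceiling are placeholders. Note that frozenTimePeriodInSecs counts from a bucket's latest event time across hot, warm, and cold, and that warm-to-cold rolling is driven by bucket count or size rather than by time:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Total searchable retention before a bucket is frozen: 48 months ~= 126227808 seconds
frozenTimePeriodInSecs = 126227808
# Archive frozen buckets instead of deleting them
coldToFrozenDir = $SPLUNK_DB/my_index/frozenarchive
# Data ceiling for the whole index (placeholder value); oldest buckets freeze early if it is exceeded
maxTotalDataSizeMB = 500000
# Warm-to-cold roll is controlled by count/size, not time, e.g.:
maxWarmDBCount = 300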
Hello, I have time stamps that are not matching. How do I table the actual "Event log time stamp"?

Splunk Time stamp: 8/14/24 4:29:21.000 AM
Event log time stamp: 2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
Currently you must create an idea for this at ideas.splunk.com, if there isn't one already.
At least some changes could be found in the _configtracker index.
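For example, a quick look (the data.* field names are an assumption worth verifying against your version):

index=_configtracker "indexes.conf"
| table _time data.path data.action _raw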
I don't necessarily need the eval, I just need it to output to the extra field in the table. Output from running the custom command looks like the following:

| nslookupsearch testcmd

Output example: 10.10.10.10