All Posts

I have arguments for my macro that contain other values, e.g. $env:user$ and $timepicker.earliest$/$timepicker.latest$. How do I include these in my macro definition? Splunk doesn't allow me to, since macro arguments must only contain alphanumeric, '_' and '-' characters.
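A hedged sketch of the usual pattern, assuming these are Simple XML tokens: the $...$ tokens cannot appear in the argument names of the macro definition, but they can be passed as argument values when the macro is called. The macro and argument names below are illustrative, not from the original post.

Definition (macros.conf, or Settings > Advanced search > Search macros):
[user_activity(3)]
args = user, earliest, latest
definition = index=main user="$user$" earliest="$earliest$" latest="$latest$"

Call site, e.g. in a dashboard search:
`user_activity($env:user$, $timepicker.earliest$, $timepicker.latest$)`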
You should use an OR condition to include all hosts, e.g. host="srv004" OR host="srv005" OR ...
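For instance (host names illustrative), either of these forms works; the IN operator is the more compact equivalent:

index=main (host="srv004" OR host="srv005" OR host="srv006")
index=main host IN ("srv004", "srv005", "srv006")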
1) Is your Power User certification already expired or nearing expiration? Once the Power User certification has expired, you cannot plan for the Admin exam (I just recently finished my Admin exam; please check the attachment, thanks).
From https://www.splunk.com/en_us/resources/splunk-recertification-policy.html:
EXAMPLE 1: Candidate holds Splunk Core Certified Power User, with a badge earned on January 1, 2019. (Note: the Power User certification expiry date will be three years later, January 1, 2022.) Candidate may retake the Core Power User exam between January 2, 2021 and January 1, 2022 to recertify at this level. The candidate's Core Power User (and Core User, if held) certification(s) will be updated with a new expiration date, three years from the date of Core Power User badge re-issuance.
2) Simply try to book the Admin exam through the Pearson website; while doing so, you will be given a list of available, approved exams.
Hi, I have the same issue. Has anyone found a solution?
This is difficult to diagnose without sight of your events and the search you are currently using. It is possible that you are hitting some sort of limit, but where that might be is almost impossible to determine without further information.
It is a bit difficult to diagnose what might be wrong without sight of your data. Could you please share some sample representative events (anonymised as necessary) from your loadjob?
Do all those servers have exactly the same Splunk version? You said that you also added a SH to this cluster; what do you actually mean by that? What do you find in the old indexer's splunkd.log after you try to add it as a cluster peer? And how are you adding it to the cluster (CLI, editing config files)?
Is there any software I need to use in addition to Splunk to achieve this? If so, do you have any suggestions?
Hello, I am trying to display my Splunk dashboard on a TV 24/7 at the front of my shop, to show a running count of customers who support our store and analysis of their feedback. The issue I am having: my dashboard is not updating correctly. It is set to refresh every 15 minutes, but when it does, the refresh takes the dashboard out of full screen, which I do not want (it shows my tabs and apps rather than just the dashboard). Question: how can I ensure that when the Splunk webpage refreshes in the browser, the dashboard is refreshed/reset in full screen? Thank you
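One common workaround, sketched under the assumption that this is a Simple XML dashboard (the host, app, dashboard, and search names are placeholders): load the dashboard through a URL that hides the Splunk chrome, so any reload lands back on a clean full-screen view, and drive the refresh from the dashboard itself rather than the browser.

https://your-splunk-host:8000/en-US/app/search/shop_dashboard?hideEdit=true&hideSplunkBar=true&hideAppBar=true&hideFooter=true

In the dashboard's Simple XML, each search can carry its own refresh interval:
<search>
  <query>index=main sourcetype=feedback | stats count</query>
  <refresh>15m</refresh>
  <refreshType>delay</refreshType>
</search>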
There seems to be a TZ issue, probably combined with some other issues in your ingestion phase. If I recall right, TZ offsets are +/- whole hours or x.5 hours between local time and UTC, but your time difference didn't match that. You must get your correct props.conf and also the raw source event before it was ingested into Splunk. With those we could help you.
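For reference, a minimal props.conf sketch of the settings that control this; the sourcetype name, timezone, and timestamp format here are placeholder assumptions, not the poster's actual configuration:

[my_sourcetype]
TZ = Europe/Helsinki
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25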
We have a huge JSON array event. When I search for that event, the search results show a few missing values for a field. Any suggestion on how to fix this issue and have all values displayed for the field?
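One possibility worth checking (an assumption, since the events aren't shown): very large JSON events can exceed the automatic extraction cutoff (extraction_cutoff under the [spath] stanza in limits.conf, 5000 bytes by default), so an explicit spath extraction sometimes recovers the missing values. The field path and output name here are placeholders:

... | spath input=_raw output=all_values path=records{}.value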
Steps Taken:
1. Installed Splunk Enterprise on all new servers.
2. Enabled clustering on the designated manager node.
3. Configured clustering on the new indexer, adding it as a peer node.
4. Enabled clustering and added the new server as a search head.
After verifying that the newly added servers appeared on the manager node, I attempted to enable clustering on the existing standalone Splunk server and add it as a peer node. However, when I tried to restart the Splunk services, they wouldn't start. I had to remove the clustering stanza for the services to start successfully. I'm unsure where I went wrong or if I missed a step, but it seems that adding the standalone server to the newly created cluster prevents it from starting unless I remove the clustering stanza.
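For comparison, a hedged sketch of how a peer is typically joined on recent Splunk versions (older releases use -master_uri instead of -manager_uri); the URI, port, and secret are placeholders. If the stanza is edited by hand instead, splunkd.log usually names the exact setting splunkd refuses to start on:

splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret your_cluster_key
splunk restart

which corresponds to this server.conf stanza on the peer:
[clustering]
mode = peer
manager_uri = https://manager.example.com:8089
pass4SymmKey = your_cluster_key
[replication_port://9887]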
Are you able to find working values for the inputs of the app? It seems like you can enter your Elasticsearch domain name, port, user, secret, interval, etc., and then theoretically it should pull data from your Elasticsearch instance. If you enter the values but it does not work, you could try searching your _internal index for keywords like "elasticsearch" to see if the app generates any errors that would explain why it is not pulling data from your Elasticsearch instance.
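A minimal sketch of such a search (which source the app logs to is an assumption; dropping the log_level filter widens the net if the field isn't extracted for those events):

index=_internal "elasticsearch" (log_level=ERROR OR log_level=WARN)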
I indexed this log in a new sourcetype on a test machine in the GMT+2 timezone, and the timestamp seems to have extracted properly. We would need to know what your timestamp settings in props.conf are to find out where the timestamp extraction is going wrong.  
Hi @isoutamo,
Below is the raw event. I don't have access to props.conf, so I just wanted to extract the timestamp from the raw event.
2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
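A search-time sketch for that format (the field names ts and event_time are illustrative choices):

... | rex "^(?<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
| eval event_time=strptime(ts, "%Y-%m-%d %H:%M:%S,%3N")
| fieldformat event_time=strftime(event_time, "%F %T.%3N")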
@gcusello has a good solution, but mind the typos (a space in the fields command and in "append"):
... | fields - count | append [ | inputlookup compliance.csv | fields Solution Status ] ...
For completeness, here's how I spliced them together, although I tried just adding your commands after my search entirely, and after my search but without the addcoltotals, and neither worked.
| loadjob savedsearch="30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
When I add your processing to the end of mine I get a table that only has one column -- index.  None of the data is there.
In such cases with malfunctioning UI elements, I would recommend testing it with a different internet browser. Which browser are you using?