All Posts


Hi @Rao_KGY, could you share your dashboard code? Maybe there's something else going on. Ciao. Giuseppe
Hi @sigma, as @richgalloway said, on Linux Splunk is usually installed in /opt, and it's a best practice to have that file system separated from root; this location is configured in an environment variable called $SPLUNK_HOME. For the data, it's possible to set up a variable (called $SPLUNK_DB) that points to the file system containing the data folders instead of the $SPLUNK_HOME/var folder; it's a best practice to put that on a different, larger file system. So you can go to $SPLUNK_HOME/etc/splunk-launch.conf and configure the $SPLUNK_DB variable for your system. Obviously this action is only for indexers or stand-alone Splunk systems, not for the other roles. Ciao. Giuseppe
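As a minimal sketch, assuming the large data disk is mounted at /splunk (a hypothetical mount point), the relevant lines in $SPLUNK_HOME/etc/splunk-launch.conf would look like this:

SPLUNK_HOME=/opt/splunk
SPLUNK_DB=/splunk/var/lib/splunk

Stop Splunk and move the existing index data to the new location before restarting with the new setting, or the indexers won't find their buckets.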
Hi @richgalloway , Thank you for the support. Thanks
Hey @gcusello @ITWhisperer, thanks for the information. FYI, I'm using the same timeframe (i.e. 24 hrs) for both panels, and the span is also the same, 1 hr. @gcusello, as per your suggestion I tried "timechart", but I hit the same issue, "No results found". The same query works fine when run as a separate search. And you're right, I shouldn't use the "table" command, but since nothing was working I tried it as a workaround.
You can change _time (or any field) in a query, but it doesn't change the indexed data (nothing does).
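To illustrate, this SPL sketch (the index and sourcetype names are hypothetical) shifts every event's timestamp an hour forward in the search results, while the _time stored in the index is unchanged:

index=myindex sourcetype=mylogs
| eval _time = _time + 3600
| table _time _raw

Re-running the search without the eval shows the original timestamps, because eval only rewrites the field in that search's result set.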
I was stuck on trying to get transaction to work. It was on my list of things to do to rewrite it the way you did, but I hadn't had the time to get to it. I ran a few tests and it appears to solve the issue. I don't know the specifics, but I guess trying to 'alter' _time really does not change the underlying value?
Here's a method that often works. Search for Start and Completed events, keeping only the most recent for each host and job. Then discard all of the Completed events. What's left will be a list of uncompleted jobs. This approach will fail if the Start and Completed events have the exact same time and arrive in the wrong order.

index=anIndex sourcetype=aSourcetype (aJob1 OR aJob2 OR aJob3) AND ("START of script" OR "COMPLETED OK" OR "ABORTED, exiting with status")
| dedup host aJobName
| search "START of script"
| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| sort _time
| eval aDay = strftime(_time, "%a. %b. %e, %Y")
| eval aStartTime = strftime(_time, "%H:%M:%S %p")
| eval aDuration = tostring((now()-_time), "duration")
| eval aEndTime = "--- Running ---"
| table aHostName aDay aJobName aStartTime aEndTime aDuration
I'm not sure everything needed can be done from the CLI, but let's try. Let's take the easier option of making the dashboard private to the user with the subject email address. I assume this is not a search head cluster.

1) Create the user.
splunk add user foo@bar.com -password changeme -role User

2) Locate the dashboard.
find /opt/splunk/etc/apps -name <<dashboard>>.xml
This will return something like /opt/splunk/etc/apps/<<app name>>/local/data/ui/views/mydashboard.xml

3) Create the user's private view directory.
mkdir -p /opt/splunk/etc/users/foo@bar.com/<<app name>>/local/data/ui/views

4) Move the dashboard to the private directory.
mv /opt/splunk/etc/apps/<<app name>>/local/data/ui/views/mydashboard.xml /opt/splunk/etc/users/foo@bar.com/<<app name>>/local/data/ui/views/mydashboard.xml

5) You may need to restart Splunk.
sudo systemctl restart splunk
Splunk has provision for two mount points: $SPLUNK_HOME (/opt/splunk by default) and $SPLUNK_DB (/opt/splunk/var/lib/splunk by default). Breaking the file system at other points is possible using links, but doing so is uncommon and not without risk.
This is the full set of process performance metrics that Splunk makes available to us. % Processor Time, % User Time, % Privileged Time, Virtual Bytes Peak, Virtual Bytes, Page Faults/sec, Working Set Peak, Working Set, Page File Bytes Peak, Page File Bytes, Private Bytes, Thread Count, Priority Base, Elapsed Time, ID Process, Creating Process ID, Pool Paged Bytes, Pool Nonpaged Bytes, Handle Count, IO Read Operations/sec, IO Write Operations/sec, IO Data Operations/sec, IO Other Operations/sec, IO Read Bytes/sec, IO Write Bytes/sec, IO Data Bytes/sec, IO Other Bytes/sec, Working Set - Private If you need other metrics, then perhaps there is third-party software available to collect the values and send them to Splunk.
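As a sketch, assuming a Windows universal forwarder and a hypothetical choice of counters and index name, these metrics are collected with a perfmon stanza in inputs.conf, with counters separated by semicolons:

[perfmon://Process]
object = Process
counters = % Processor Time; IO Read Bytes/sec; IO Write Bytes/sec; Working Set
instances = *
interval = 60
index = perfmon

Only counters from the list above are available from this input; anything else would need a third-party collector.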
Hi, I installed Splunk on a Linux server in /opt/splunk. The server has two disks, one 50 GB (sdb1) and another 6 TB (sda1). I want to move the /opt/splunk/var folder (and all of its contents) to /splunk/var, where the second, much larger partition is mounted. Essentially I want to separate etc and var onto different partitions: etc remains on sdb1 and var goes to sda1. I need a detailed solution. Thanks
Hi @nachi, the upgrade from 7.3.0 to 9.x is a long path because they are very different products and you have to change the Python version. First migrate to 8.0.x or 8.1.x, following the steps at https://docs.splunk.com/Documentation/Splunk/8.2.1/Installation/HowtoupgradeSplunk then to 8.2.x https://docs.splunk.com/Documentation/Splunk/9.1.0/Installation/HowtoupgradeSplunk then to 9.0 or 9.1, and finally to 9.2, as described at https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/HowtoupgradeSplunk Special attention must be paid to app migration, because Python changed and the old apps may not be compatible with the new version; use the Upgrade Readiness app (https://splunkbase.splunk.com/app/5483) to check your apps, following the documentation at https://docs.splunk.com/Documentation/Splunk/latest/UpgradeReadiness/About For the apps from Splunkbase, find the new versions compatible with the latest Splunk version. Ciao. Giuseppe
Hi, we have a single Splunk instance (Linux) hosted in AWS. The current version is Splunk Enterprise 7.3.0 and we would like to upgrade to 9.x. Could someone please help us with the upgrade path and instructions?
It looks like you have a transform that employs a regular expression (regex).  This log entry is showing some metrics that result from that regex.  out.splunk would be how much was indexed (unless dropped by another transform) and out.drop is how much was not indexed.
@Abhigyan_2907, assuming that by PCF you mean Pivotal Cloud Foundry: without looking at the sample logs, it's difficult to formulate a search to get them. There are different log types for applications, and based on your requirement you could search the respective types. Have a look at this: https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html
I know the index and sourcetype, and the PCF instances are coming in, but what should I query to fetch each instance's events, like starting, stopped, running, and crashed, with the timestamp?
Apart from what @Richfez said, crcSalt is rarely useful. It's often better to raise the size of the chunk of data used to calculate the file's CRC (the initCrcLength option in inputs.conf). And just to be on the safe side, where are you putting those transforms? They should not be on the UF but on the first "heavy" component, an HF or indexer, in the event's path.
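As a sketch (the monitored path is hypothetical), raising the CRC seed length looks like this in inputs.conf:

[monitor:///var/log/myapp/*.log]
initCrcLength = 1024

This makes Splunk hash the first 1024 bytes of each file instead of the default 256, which helps it tell apart files that share a long common header.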
You could also go the other way around. Do the nullQueue by default and only send to indexQueue those that _do_ match the timestamp regex.
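That pattern can be sketched like this (the sourcetype name and timestamp regex are hypothetical). In props.conf:

[my_sourcetype]
TRANSFORMS-route = drop_everything, keep_timestamped

And in transforms.conf, where a later transform overrides an earlier one for events it matches:

[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_timestamped]
REGEX = ^\d{4}-\d{2}-\d{2}
DEST_KEY = queue
FORMAT = indexQueue

Everything goes to the nullQueue by default; only events matching the timestamp regex are routed back to the indexQueue.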
Hi, we currently have Syncsort configured to send SYSLOG data to Splunk for dashboards. Is it possible to send application data (for example, an ESDS file) to Splunk using Ironstream? Thanks.
@Cheng2Ready, please try this run-anywhere example:

<form version="1.1" theme="light">
  <label>DropDown</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="sources" searchWhenChanged="true">
      <label>Sources</label>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <choice value="D">D</choice>
      <default>A</default>
      <initialValue>A</initialValue>
      <change>
        <condition value="A">
          <set token="searchA">true</set>
          <unset token="searchB"></unset>
          <unset token="searchC"></unset>
          <unset token="searchD"></unset>
        </condition>
        <condition value="B">
          <set token="searchB">true</set>
          <unset token="searchA"></unset>
          <unset token="searchC"></unset>
          <unset token="searchD"></unset>
        </condition>
        <condition value="C">
          <set token="searchC">true</set>
          <unset token="searchA"></unset>
          <unset token="searchB"></unset>
          <unset token="searchD"></unset>
        </condition>
        <condition value="D">
          <set token="searchD">true</set>
          <unset token="searchA"></unset>
          <unset token="searchB"></unset>
          <unset token="searchC"></unset>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$searchA$">
      <title>Panel1</title>
      <table>
        <title>Search A</title>
        <search>
          <query>| makeresults count=5 | streamstats count | eval search="SEARCH A Results - " + count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$searchA$">
      <title>Panel2</title>
      <html>
        <a href="https://community.splunk.com/t5/Splunk-Answers/ct-p/en-us-splunk-answers">Splunk Answers</a>
      </html>
    </panel>
  </row>
  <row>
    <panel depends="$searchB$">
      <title>Panel3</title>
      <table>
        <title>Search B</title>
        <search>
          <query>| makeresults count=5 | streamstats count | eval search="SEARCH B Results - " + count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$searchC$">
      <title>Panel4</title>
      <table>
        <title>Search C</title>
        <search>
          <query>| makeresults count=5 | streamstats count | eval search="SEARCH C Results - " + count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$searchD$">
      <title>Panel5</title>
      <table>
        <title>Search D</title>
        <search>
          <query>| makeresults count=5 | streamstats count | eval search="SEARCH D Results - " + count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>