All Posts

Hi Splunkers, I recently noticed an issue while opening dashboards—both default and custom app dashboards—in Splunk. I'm encountering a console error that seems to be related to a JavaScript file named layout.js. The error:

Failed to load resource: the server responded with a status of 404 (Not Found)
:8000/en-US/splunkd/__raw/servicesNS/sanjai/search/static/appLogo.png:1

The screenshots above were taken from the browser's developer console—specifically the Network tab. I'm unsure why this is occurring, as I haven't made any recent changes to the static resources. Has anyone else run into a similar issue? Would appreciate any insights!
<row>
  <panel>
    <input type="link" token="needed">
      <label>Show missing hosts from:</label>
      <choice value="link1">Splunk</choice>
      <choice value="link2">Tanium</choice>
      <choice value="link3">Tenable</choice>
      <choice value="link4">Forescout</choice>
      <choice value="link5">Fireeye</choice>
      <choice value="link6">SCCM</choice>
      <choice value="link7">SentinelOne</choice>
      <choice value="link8">Multiple Tools (3 or more)</choice>
      <default>Multiple Tools (3 or more)</default>
      <change>
        <condition value="link1">
          <set token="show_splunk">true</set>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link2">
          <unset token="show_splunk"></unset>
          <set token="show_tanium"></set>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link3">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <set token="show_tenable"></set>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link4">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <set token="show_forescout"></set>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link5">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <set token="show_fireeye"></set>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link6">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <set token="show_sccm"></set>
          <unset token="show_sentinelone"></unset>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link7">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <set token="show_sentinelone"></set>
          <unset token="show_multiple3"></unset>
        </condition>
        <condition value="link8">
          <unset token="show_splunk"></unset>
          <unset token="show_tanium"></unset>
          <unset token="show_tenable"></unset>
          <unset token="show_forescout"></unset>
          <unset token="show_fireeye"></unset>
          <unset token="show_sccm"></unset>
          <unset token="show_sentinelone"></unset>
          <set token="show_multiple3"></set>
        </condition>
      </change>
    </input>
    <table depends="$show_splunk$">
      <title>Splunk Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname Splunk_LastSeen | where Splunk_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_tanium$">
      <title>Tanium Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname Tanium_LastSeen | where Tanium_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_tenable$">
      <title>Tenable Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname Tenable_LastSeen | where Tenable_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_forescout$">
      <title>Forescout Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname Forescout_LastSeen | where Forescout_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_fireeye$">
      <title>Fireeye Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname FireEye_LastSeen | where FireEye_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_sccm$">
      <title>SCCM Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname Sccm_LastSeen | where Sccm_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_sentinelone$">
      <title>SentinelOne Missing Hosts</title>
      <search base="asset_explorer">
        <query>| table hostname SentinelOne_LastSeen | where SentinelOne_LastSeen = "X"</query>
      </search>
      <option name="drilldown">none</option>
    </table>
    <table depends="$show_multiple3$">
      <title>3 Tools or More Missing</title>
      <search base="asset_explorer">
        <query>| where Missing_Tools >= 3 | table hostname Splunk_LastSeen Tanium_LastSeen Tenable_LastSeen Forescout_LastSeen FireEye_LastSeen Sccm_LastSeen Missing_Tools | sort 0 -Missing_Tools +hostname</query>
      </search>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>

Thanks!
Hello PickleRick, Apologies: I had the wrong impression. The events were not really gone: I deleted them myself, hoping that they would be reindexed immediately with the correct _time, as they did in the previous trials. This time however, after deleting them from the indexer they were not reindexed automatically, so I incorrectly used the word "gone". So I added a crcSalt temporarily in the inputs.conf of the forwarder, all events were reindexed with the correct _time and everything worked perfectly thanks to your suggestion. Many thanks again for your solution
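For anyone landing on this thread later, a temporary crcSalt stanza looks roughly like this (the monitored path and index name below are placeholders, not the actual ones from this thread):

```
# inputs.conf on the forwarder -- temporary, remove after the re-index completes
[monitor:///var/log/myapp/app.log]
index = lts
crcSalt = <SOURCE>
```

`crcSalt = <SOURCE>` (the literal string `<SOURCE>`) adds the file's full path to the CRC calculation, which makes Splunk treat the file as new and re-read it. Be aware that leaving it in place permanently can cause unwanted re-indexing when files are renamed or rotated.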
A Splunk UF can read a CSV file, but not as a lookup; the UF treats it as a regular log file, and no, Splunk does not automatically upload it as a lookup. You might consider using the Splunk UF to monitor the CSV file. Create a monitoring stanza:

[monitor://path...]
sourcetype =
index =

and then set props.conf with this setting (for structured files such as CSV, this needs to be on the forwarder itself, since the UF parses structured data locally):

[theCSVfile]
INDEXED_EXTRACTIONS = csv
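Putting that together, a minimal end-to-end sketch (the file path, index, and sourcetype names are placeholders for your environment):

```
# inputs.conf on the universal forwarder
[monitor:///opt/data/hosts.csv]
index = main
sourcetype = my_csv

# props.conf for the same sourcetype
[my_csv]
INDEXED_EXTRACTIONS = csv
```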
Hello, first and foremost, always start by creating a backup:
1. Create a backup of $SPLUNK_HOME/etc on every single Splunk server you have.
2. Start with standalone servers and move on to clusters.
3. Grab the wget command from splunk.com.
4. If you choose to use rpm:
   - stop Splunk: /opt/splunk/bin/splunk stop
   - run: rpm -U splunk_package....
5. Start Splunk: /opt/splunk/bin/splunk start --accept-license
6. Check that everything went well: tail -f /opt/splunk/var/log/splunk/splunkd.log
Hello, it looks like both add-ons are archived. Consider creating a script; if not, the HTTP Event Collector (HEC) will be your best bet. As far as I know, Jira supports HEC.
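If you go the HEC route, events are sent as JSON via POST to the collector endpoint (/services/collector/event), with an `Authorization: Splunk <hec-token>` header. A minimal payload for a hypothetical Jira issue could look like this; the index, sourcetype, and the field names inside `event` are illustrative, not a fixed schema:

```json
{
  "time": 1714000000,
  "host": "jira-exporter",
  "index": "jira",
  "sourcetype": "jira:issue",
  "event": {
    "key": "PROJ-123",
    "summary": "Example ticket",
    "status": "In Progress"
  }
}
```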
The DATETIME_CONFIG setting has nothing to do with already indexed data so something else must have happened if you "lost" your events.
The main problem with this limitation is the lack of feedback about the truncation: dashboard users will wrongly assume that all entries are in the list.
@deepdiver From all the messages present here, it appears to be something around the KVStore. You can check the KVStore status first using the command below:

./splunk show kvstore-status

If it's up and running, that's good; otherwise, you have to check the splunkd logs first to find the exact ERROR entries.
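To dig into those splunkd errors from the search bar, something like this is a reasonable starting point (the exact component names emitted vary by Splunk version):

```
index=_internal sourcetype=splunkd log_level=ERROR KVStore
| stats count by component
```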
Hi Splunkers! I'm currently working on a project where the goal is to visualize various KPIs in Splunk based on Jira ticketing data. In our setup, Jira is managed separately and contains around 10 different projects. I'm considering multiple integration methods:
- Using the Jira Issue Input Add-on
- Using the Splunk for JIRA Add-on
Before diving in, I'd love to hear from your experiences:
- Which method do you recommend for efficient and reliable integration?
- Any specific limitations or gotchas I should be aware of?
- Is the Jira Issue Input Add-on scalable enough for multiple Jira projects?
Thanks in advance for your insights!
Ok, I quickly managed to reindex the file through a temporary crcSalt in the forwarder's inputs.conf. The events are now indexed fine thanks to removing DATETIME_CONFIG = NONE. Many thanks to PickleRick for providing the solution.
Ok, I removed the:

DATETIME_CONFIG=NONE

But now all events are gone. I also tried to empty the index:

/opt/splunk/bin/splunk clean eventdata -index lts
This action will permanently erase all events from the index 'lts'; it cannot be undone.
Are you sure you want to continue [y/n]? y
Cleaning database lts.

And restarted both the indexer and the forwarder, but no events are sent.
Hi Romedawg, we're currently encountering the same issues; could you elaborate a bit more on your solution? Specifically, what command did you use to generate the CA/self-signed certificate? You're making a server.pem file, but do you use it elsewhere in your setup? Also, in the second part of your post, you run openssl verify on server.crt. Could you clarify where that file comes from? Is it the same one referenced in your sslConfig stanza, server.pem? Or is it another file entirely?
You can't. Some things simply take time.
Hi @danielbb, as the others also said: you should have two different serverclasses (if you have a Deployment Server) or two distribution lists if you use another tool. I don't like the solution of hardcoding a rule in your script, because you'd have to remember that configuration every time going forward and keep managing it! Ciao. Giuseppe
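As a sketch, two server classes in serverclass.conf on the Deployment Server could look like this (the class names, whitelist patterns, and app names are placeholders for your environment):

```
[serverClass:linux_prod]
whitelist.0 = lnxprod-*

[serverClass:linux_prod:app:my_prod_inputs]
restartSplunkd = true

[serverClass:linux_dev]
whitelist.0 = lnxdev-*

[serverClass:linux_dev:app:my_dev_inputs]
restartSplunkd = true
```

Each class targets a different set of hosts via its whitelist, so each group receives only its own apps.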
Hi @Cheng2Ready, you run your alert Monday through Friday, and you filter your results using the above search; this way you will have no results on those days, so the alert will not fire. Ciao. Giuseppe
Hi @MatheoCaneva1, if you have 3 Indexers (IDX), 1 Search Head (SH), 1 Heavy Forwarder (HF), and a server with many roles, you should check whether that last one is also the Cluster Manager; in other words, whether you have an Indexer Cluster (even if it's strange that you don't know whether you have one!). You can check this by accessing that server and looking in [Settings > Indexer Cluster]: in this dashboard, you can see whether you have an Indexer Cluster and its status.
About the Search Head Cluster: you certainly don't have one, because you have only one SH (at least three SHs are required!). The SHCD is the Search Head Cluster Deployer, a machine delegated to managing Search Head Clusters; since you don't have a Search Head Cluster, you don't have one of these either.
Distributed Search isn't a Splunk role; you probably mean the Deployment Server, which manages Forwarders and possibly Search Heads (if you don't have a cluster).
Summarizing: if you have an Indexer Cluster, you have to upgrade your servers in this order:
1. Cluster Manager (which is also DS, LM, MC)
2. SH
3. IDX, HF
4. UF
If you don't have an Indexer Cluster:
1. SH
2. IDX, DS, LM, MC, HF
3. UF
Finally, I suggest reading this document, which describes Splunk architectures, to understand yours: https://docs.splunk.com/Documentation/SVA/current/Architectures/About
Ciao. Giuseppe
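Besides the UI check, the presence of an Indexer Cluster can also be verified from the CLI on the server you suspect is the Cluster Manager (exact output wording varies by version; a node with clustering disabled will report an error to that effect):

```
/opt/splunk/bin/splunk show cluster-status
```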
How can I shorten the time? I have more than 20 servers.
This is confusing: how did you get that "hard coded" text in the first place? In Splunk, the opposite is harder: rendering a system date into English word strings. But if you got those strings in some dataset, you can certainly "translate" them back. Suppose your hard-coded input is called hardcoded; this search will turn the string into systemdate:

| eval decrement = case(hardcoded == "Today", 0, hardcoded == "Yesterday", 1, true(), replace(hardcoded, "Last (\d+).+", "\1"))
| eval systemdate = strftime(relative_time(now(), "-" . decrement . "day"), "%F")

decrement  hardcoded     systemdate
0          Today         2025-04-24
1          Yesterday     2025-04-23
2          Last 2nd Day  2025-04-22
3          Last 3rd Day  2025-04-21
4          Last 4th Day  2025-04-20
5          Last 5th Day  2025-04-19

Here is a full emulation for you to play with and compare with real data:

| makeresults format=csv data="hardcoded
Today
Yesterday
Last 2nd Day
Last 3rd Day
Last 4th Day
Last 5th Day"
| eval decrement = case(hardcoded == "Today", 0, hardcoded == "Yesterday", 1, true(), replace(hardcoded, "Last (\d+).+", "\1"))
| eval systemdate = strftime(relative_time(now(), "-" . decrement . "day"), "%F")
If you are still having the same issue next week, I will be in an environment where I can help better; I'm currently going off memory. Please let me know if you still need help on Monday and I can troubleshoot further. Sorry I can't think of anything else to suggest right now.