All Posts



Hi @PickleRick, 1. You're totally right. As far as I understood, this server is for beginner use, labs, etc. 2. That's right, but in the past I tried to clean some of those directories in my test environment and it didn't end well. What is the best practice? I thought I could change the minFreeSpace=5000 setting to 4000, for example ("Pause indexing if free disk space (in MB) falls below"). I'm reading some posts from the community, but I'm not finding a clear answer about reducing the retention time or deleting colddb... I really don't want to mess things up, and if you have a link to the documentation for my problem I'd appreciate it.
1. This is not a big server. 2. You seem to have a lot of data in /var (almost as much as your whole Splunk installation). Typically, for a Splunk server, /var would only contain the last few days/weeks of logs and probably some cache from the package manager. It's up to you to diagnose what is eating up your space there.
Please share your current searches, some sample events, and what your expected result would look like (anonymised, of course).
Hello @PickleRick and @richgalloway. Thank you both for your answers! First, I will say this infrastructure is not mine, but now I need to optimize it. Of course I know we can't fill the disk to the brim, and I'm trying to fully understand the impact of my future actions. @PickleRick, I'm not quite sure what uses up my space since I'm not a Splunk expert yet, haha. Here is what I found when I run the command: sudo du -sh /* So, do I need to reduce the retention time, or maybe something is using more space than it should be? Have a nice day, all!
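If it helps narrow things down, a common way to find which subdirectory is actually consuming the space is to sort per-directory totals so the biggest offenders land at the bottom (the /var path here is just an example; adjust for your server):

```shell
# Per-directory usage under /var, sorted by human-readable size
# (smallest first), showing only the 10 largest entries
du -sh /var/* 2>/dev/null | sort -h | tail -n 10
```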
Sorry, but it doesn't work.

| makeresults
| eval data = split("10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ... 10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ... 10:20:56 25/Jan/2024 id=2 a=something b=253 c=385 ... 10:21:35 25/Jan/2024 id=2 a=something b=253 c=385 ... 10:21:36 25/Jan/2024 id=2 a=something2 b=11 c=12 ... 10:22:56 25/Jan/2024 id=2 a=xyz b=- c=385 ...", " ")
| mvexpand data
| rename data as _raw
| extract
| rex "(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%H:%M:%S %d/%b/%Y")
| stats max(_time) as _time values(*) as * by id
| foreach * [eval changed = mvappend(changed, if(mvcount(<<FIELD>>) > 1, "changed field \"<<FIELD>>\"", null()))]
| table _time changed
| eval changed = mvjoin(changed, ", ")

It outputs:

_time changed
2024-01-25 10:20:56 changed field "c"
2024-01-25 10:22:56 changed field "a", changed field "b", changed field "c"

Which is definitely not what happened in the data. Firstly, we don't know which id we're talking about; secondly, at 10:20:56 there could have been no change since it's our first data point. And no change at all is reported at 10:21:36... "Normal" stats is _not_ the way to go to find moments of change. It can be a way to find whether a field changed at all throughout the sample, but not when it changed. You need to use streamstats (or autoregress and clever sorting) to get the value from the previous event, so you have something to compare against. Otherwise you have only the overall aggregation, not "running changes".
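For illustration, a minimal streamstats sketch along those lines (field names a/b/c and id taken from the sample above; assumes events are sorted oldest-first — a sketch, not a tested solution):

```
| streamstats current=f window=1 last(a) as prev_a last(b) as prev_b last(c) as prev_c by id
| eval changed=mvappend(if(a!=prev_a, "a", null()), if(b!=prev_b, "b", null()), if(c!=prev_c, "c", null()))
| where isnotnull(changed)
| table _time id changed
```

On the first event of each id the prev_* fields are null, so no change is flagged there, which matches the "first data point" objection above.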
Hi, I have database1 and database2. I have query1 to get the data from database1, query2 to get data from database2, and query3 to get the unique values from database2 which don't exist in database1. Now my requirement is to combine the common values in both databases using query1 & query2, and also the unique values from query2 (from database2) which don't exist in database1. Please provide me with the Splunk query.
OK so what do those events look like? What data do they contain? Please share some anonymised examples.
Hi, Thanks for your reply. We have a WAF and a firewall and ingest their logs into Splunk. Regards,
Hi, is this resolved? if yes, how did you do it?
Dear Team, Is it possible to join a Splunk license server through proxies? I found this, but I don't know if it applies to this context: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/ConfigureSplunkforproxy Regards,
Hi @chakavak, it's correct and it should be sufficient. Anyway, please add in $SPLUNK_HOME\etc\system\local an inputs.conf file containing the following stanza:

[default]
host = mydashboard

and restart Splunk on the Universal Forwarder. Ciao. Giuseppe
[general]
serverName = mydashboard
pass4SymmKey = $7$Jte1qcrLi+3xY2ipx1brJChXbKmr+9ZYKthpA0Edywk92IjolIKAEg==

[sslConfig]
sslPassword = $7$+6pIzsRauFB5hevEHOxTpjcV3OW9bakXS9oFXZYydFHaX98N1irSjg==

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
Hi @chakavak, could you share your $SPLUNK_HOME\etc\system\local\server.conf? Ciao. Giuseppe
Hi @Shihua, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Excuse me, can you tell me how to use a calculated field for renaming the host (for example, changing "WIN-KLV1NNUJO8P" to "mydashboard")? I'm new to Splunk and learning.
If you have the episodeID, you can link directly to it: https://YOURSPLUNKSERVER:8000/en-US/app/itsi/itsi_event_management?earliest=-7d%40h&latest=now&form.earliest_time=-7d%40h&form.latest_time=now&episodeid=YOUREPISODEID Please be aware of the time span: if the episode is older than 7d, it won't be found, because -7d is set in THIS link.
You could do something like this:

| makeresults format=json data="[{ \"iphone\": { \"price\" : \"50\", \"review\" : \"Good\" }, \"desktop\": { \"price\" : \"80\", \"review\" : \"OK\" }, \"laptop\": { \"price\" : \"90\", \"review\" : \"OK\" } },{ \"tv\": { \"price\" : \"50\", \"review\" : \"Good\" }, \"desktop\": { \"price\" : \"60\", \"review\" : \"OK\" } }]"
| fields _raw _time
| eval p_name=json_array_to_mv(json_keys(_raw))
| streamstats count as row
| eval flag = pow(2, row - 1)
| mvexpand p_name
| eval {p_name}=flag
| fields - flag row p_name
| stats sum(*) as *

Fields with 1 are only in the first event, fields with 2 are only in the second event (missing from the first event), and fields with 3 are in both events. This also works for more events, as the sums are essentially binary flags for which events the fields come from: e.g. for 3 events, 7 would be all three events, 5 would be the first and third, etc.
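To make the bit-flag decoding concrete, here is a small Python illustration of the same idea (the field names mirror the sample JSON above; this is just an outside-of-Splunk sketch of the arithmetic, not part of the SPL):

```python
# Each event gets a flag 2**(row-1); summing the flags per field yields
# a bitmask telling us exactly which events the field appeared in.
events = [
    {"iphone": 1, "desktop": 1, "laptop": 1},  # event 1 -> flag 1
    {"tv": 1, "desktop": 1},                   # event 2 -> flag 2
]

sums = {}
for row, event in enumerate(events, start=1):
    flag = 2 ** (row - 1)
    for field in event:
        sums[field] = sums.get(field, 0) + flag

# Decode: bit i set in the sum means the field was present in event i+1
membership = {
    field: [i + 1 for i in range(len(events)) if s & (1 << i)]
    for field, s in sums.items()
}
print(sums)        # {'iphone': 1, 'desktop': 3, 'laptop': 1, 'tv': 2}
print(membership)  # {'iphone': [1], 'desktop': [1, 2], 'laptop': [1], 'tv': [2]}
```

Here "desktop" sums to 3 (bits 1 and 2 set), i.e. it appears in both events, exactly as the SPL `stats sum(*)` result indicates.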
Yes, I restarted the SplunkForwarder service
Hi @chakavak, outputs.conf must not be changed! Did you restart Splunk on the UF after the change? Ciao. Giuseppe
Hi @gcusello, Thank you for your reply. I changed the hostname in server.conf, but on the forwarder there is no inputs.conf in the mentioned path; I have outputs.conf! It also doesn't work when I just change the server.conf file.