All Posts

Hello. I am a Splunk newbie. I have a question about the replication factor in search head clustering. The docs say that search artifacts are only replicated for scheduled saved searches: https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/ChooseSHCreplicationfactor I'm curious about the reason for, and the advantage of, replicating search artifacts only in this case. And then, in the case of a real-time search, is it correct that search artifacts are not replicated and only remain on the local server? In that case, in a clustered environment, member2 should not be able to see the search results of member1. But I can view them by using the loadjob command on member2. So wouldn't it be possible to view real-time search artifacts as well? Thank you
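For reference, a minimal sketch of the loadjob call used on member2 (the search ID and saved-search name here are made-up placeholders):

| loadjob 1706170000.123

or, for a scheduled saved search:

| loadjob savedsearch="admin:search:My Scheduled Search"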
Yes, the _time is not the time of change, I noticed it too. But overall the code reports all changes summarized per id via | table _time id changed The first data point is at 10:20:30, so the reported change at 10:20:56 is correct. I would be very interested in a solution involving "running changes". BTW, never heard about the "autoregress" command, thx!
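For anyone else who hadn't met it, a minimal sketch of what autoregress does (the field name a is just illustrative): it copies a value from p events back into the current event, so you can compare against it:

| autoregress a AS prev_a p=1
| eval changed = if(a != prev_a, "changed field \"a\"", null())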
Hello Team,
We have deployed the machine agent as a sidecar (a separate container within the pod) for Apache in OSE. It is working for most of the pods, but for one pod we are getting the error below.

code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:14,654 DEBUG RegistrationTask - Encountered error during registration.
com.appdynamics.voltron.rest.client.NonRestException: Method: SimMachinesAgentService#registerMachine(SimMachineMinimalDto) - Result: 401 Unauthorized - content:
  at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:62) ~[rest-client-1.1.0.245.jar:?]
  at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:156) ~[feign-core-10.7.4.jar:?]
  at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:80) ~[feign-core-10.7.4.jar:?]
  at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[feign-core-10.7.4.jar:?]
  at com.sun.proxy.$Proxy114.registerMachine(Unknown Source) ~[?:?]
  at com.appdynamics.agent.sim.registration.RegistrationTask.run(RegistrationTask.java:147) [machineagent.jar:Machine Agent v23.9.1.3731 GA compatible with 4.4.1.0 Build Date 2023-09-20 05:14:38]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) [?:?]
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) [?:?]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
  at java.lang.Thread.run(Thread.java:834) [?:?]
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,189 DEBUG GlobalTagsConfigsDecider - Global tags enabled: false
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,189 DEBUG RegistrationTask - Running registration task
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,256 WARN RegistrationTask - Encountered error during registration. Will retry in 60 seconds.
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,256 DEBUG RegistrationTask - Encountered error during registration.

We have cross-verified and everything looks good from the configuration end. Kindly help us with your suggestions.
WAF and firewall are typically _not_ solutions associated with email traffic or users' web-related behaviour, so you might want to reconsider your sources list.
Hi Team, we have opted for 250 GB of licensing on a daily basis. If usage reaches more than 70% (i.e. 175 GB) I need to get an alert; similarly, if it reaches 80% or more (i.e. 200 GB) I need to get another alert. And finally, if it crosses 90% (i.e. 225 GB) I need to get a third alert. Can you help me with the search query?
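For reference, a minimal sketch of the base search such alerts are usually built on, assuming the default license_usage.log in the _internal index and a 250 GB daily quota (each threshold would become its own scheduled alert with its own where clause):

index=_internal source=*license_usage.log* type=Usage
| bin _time span=1d
| stats sum(b) as bytes by _time
| eval used_gb = round(bytes / 1024 / 1024 / 1024, 2)
| eval pct = round(used_gb / 250 * 100, 1)
| where pct >= 70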
1. For beginner use, it's _probably_ OK to have relatively short retention, provided you have sources that continuously supply your environment with events. Otherwise you might want to keep your data for longer, so that your users have some material to search. No one can tell you what your case is. If it's an all-in-one server (which I assume it is), the index settings are in the settings menu under... surprise, surprise, "Indexes". The same retention settings can also be put in indexes.conf, as sketched below.
2. That's completely out of scope of Splunk administration as such and is a case for your Linux admins. We have no idea what is installed on your server, what its configuration is and why it is configured that way. So all we can tell you is "you have way too much here for the server's size" and you have to deal with it. And yes, removing random directories is not a very good practice.
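A minimal indexes.conf sketch of the two settings that usually drive retention (the index name and the values are placeholders, not recommendations):

[myindex]
# roll buckets to frozen (deleted by default) after 90 days
frozenTimePeriodInSecs = 7776000
# cap the total index size at roughly 50 GB
maxTotalDataSizeMB = 51200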
Hi @PickleRick,
1. You're totally right. As far as I understood, this server is for beginner use, labs, etc.
2. That's right, but in the past I tried to clean some of those directories in my test environment and it didn't end well. What is the best practice? I thought I could change the minFreeSpace=5000 variable to 4000, for example ("Pause indexing if free disk space (in MB) falls below"). I'm reading some posts from the community but I'm not finding a clear answer about reducing the retention time or deleting colddb... I really don't want to mess things up, and if you have a link to the documentation for my problem I'd appreciate it.
1. This is not a big server.
2. You seem to have a lot of data in /var (almost as much as your whole Splunk installation). Typically, a Splunk server's /var would only contain the last few days/weeks of logs and probably some cache from the package manager. It is up to you to diagnose what is eating up your space there.
Please share your current searches and some sample events, and what your expected result would look like (anonymised of course)
Hello @PickleRick and @richgalloway, thank you both for your answers! First, I will say this infrastructure is not mine, but now I need to optimize it. Of course I know we can't fill the disk to the brim, and I'm trying to fully understand the impact of my future actions. @PickleRick, I'm not quite sure what uses up my space, since I'm not a Splunk expert yet, haha. Here is what I found when I ran the command: sudo du -sh /* So, do I need to reduce the retention time, or is maybe something using more space than it should? Have a nice day all!
Sorry, but it doesn't work.

| makeresults
| eval data = split("10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ... 10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ... 10:20:56 25/Jan/2024 id=2 a=something b=253 c=385 ... 10:21:35 25/Jan/2024 id=2 a=something b=253 c=385 ... 10:21:36 25/Jan/2024 id=2 a=something2 b=11 c=12 ... 10:22:56 25/Jan/2024 id=2 a=xyz b=- c=385 ...", " ")
| mvexpand data
| rename data as _raw
| extract
| rex "(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%H:%M:%S %d/%b/%Y")
| stats max(_time) as _time values(*) as * by id
| foreach * [eval changed = mvappend(changed, if(mvcount(<<FIELD>>) > 1, "changed field \"<<FIELD>>\"", null()))]
| table _time changed
| eval changed = mvjoin(changed, ", ")

It outputs:

_time changed
2024-01-25 10:20:56 changed field "c"
2024-01-25 10:22:56 changed field "a", changed field "b", changed field "c"

which is definitely not what happened in the data. Firstly, we don't know which id we're talking about; secondly, at 10:20:56 there could have been no change, since that's our first data point. And no change is reported at 10:21:36 at all... A "normal" stats is _not_ the way to go to find the moments of change. It can tell you whether a field changed at all throughout the sample, but not when it changed. You need to use streamstats (or autoregress and clever sorting) to get the value from the previous event, so that you have something to compare it with. Otherwise you only have the overall aggregation, not the "running changes".
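A minimal sketch of the streamstats variant on the same extracted fields (a, b and c as in the test data above; untested beyond this sample):

| sort 0 id _time
| streamstats current=f window=1 last(a) as prev_a last(b) as prev_b last(c) as prev_c by id
| foreach a b c [eval changed = mvappend(changed, if(isnotnull(prev_<<FIELD>>) AND prev_<<FIELD>> != <<FIELD>>, "changed field \"<<FIELD>>\"", null()))]
| where isnotnull(changed)
| table _time id changed

The sort plus by id keeps each id's events in time order, and current=f window=1 carries the previous event's values forward, so the comparison happens per event rather than over the whole aggregation.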
Hi, I have database1 and database2. I have query1 to get the data from database1, query2 to get the data from database2, and query3 to get the unique values from database2 which don't exist in database1. Now my requirement is to combine the common values from both databases (via query1 & query2) with the unique values from query2 that don't exist in database1. Please provide me the Splunk query.
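Without the actual queries this is only a guess, but here is a minimal sketch of one way to combine them, assuming DB Connect's dbxquery and a shared key field id (the connection names, tables and field are all placeholders):

| dbxquery connection="database1" query="SELECT id FROM table1"
| eval src="db1"
| append [| dbxquery connection="database2" query="SELECT id FROM table2" | eval src="db2"]
| stats values(src) as src by id
| where mvcount(src) = 2 OR src = "db2"

The final where keeps the ids present in both databases plus the ids found only in database2.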
OK so what do those events look like? What data do they contain? Please share some anonymised examples.
Hi, Thanks for your reply. We have a WAF and firewall and ingest their logs in Splunk. Regards,    
Hi, is this resolved? If yes, how did you do it?
Dear Team, is it possible to connect to a Splunk license server through a proxy? I found this, but I don't know whether it applies in this context: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/ConfigureSplunkforproxy Regards,
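For what it's worth, the page linked above configures splunkd's outbound HTTP through a server.conf stanza along these lines (host and port are placeholders; whether the license-manager connection honours it is exactly the open question here):

[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1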
Hi @chakavak,
it's correct and it should be sufficient. Anyway, please add in $SPLUNK_HOME\etc\system\local an inputs.conf file containing the following stanza:

[default]
host = mydashboard

and restart Splunk on the Universal Forwarder.
Ciao.
Giuseppe
[general]
serverName = mydashboard
pass4SymmKey = $7$Jte1qcrLi+3xY2ipx1brJChXbKmr+9ZYKthpA0Edywk92IjolIKAEg==

[sslConfig]
sslPassword = $7$+6pIzsRauFB5hevEHOxTpjcV3OW9bakXS9oFXZYydFHaX98N1irSjg==

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
Hi @chakavak,
could you share your $SPLUNK_HOME\etc\system\local\server.conf?
Ciao.
Giuseppe
Hi @Shihua,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated