All Posts


Let's say we have the following data set:

Fruit_ID  Fruit_1  Fruit_2
1         Apple    NULL
2         Apple    NULL
3         Apple    NULL
4         Orange   NULL
5         Orange   NULL
6         Orange   NULL
7         Apple    Orange
8         Apple    Orange
9         Apple    Orange
10        Apple    Orange

Now I am trying to count the total amount of each fruit; in the above example it should be 7 apples and 7 oranges. The problem is that these fruits are separated into 2 different columns, because a single row can contain both an apple AND an orange. How do I deal with this when counting the total amount of fruit? Counting one column at a time works:

| stats count by Fruit_1

But how do I count both to get a total, since they are 2 separate columns? I tried combining both columns so all values end up in one long list in a single column, but I could not find a definitive answer on how to do this. I also tried appending results (first count Fruit_1, then append a count of Fruit_2), but I did not get the right result of Apple: 7, Orange: 7; it was either one or the other. Does anybody know how to count over multiple fields like this and combine the result into one field?
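One commonly suggested approach (a sketch only, not tested against your data, and assuming NULL is a literal string value in your events) is to merge the two columns into a single multivalue field with mvappend, expand it into one row per value, and then count:

```
| eval Fruit=mvappend(Fruit_1, Fruit_2)
| mvexpand Fruit
| where Fruit!="NULL"
| stats count by Fruit
```

With the sample data above this should give Apple: 7 and Orange: 7, since each row contributes every non-NULL value it holds to the combined Fruit field.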
Sorry for missing the details ... The message came from the app, not Splunk itself. Splunk itself is a standalone instance, version 8.1.5, running on a RHEL 8.10 Linux VM. I downloaded the package ... See more...
Sorry for missing the details ... The message came from the app, not Splunk itself. Splunk itself is a standalone instance, version 8.1.5, running on a RHEL 8.10 Linux VM. I downloaded the package from Splunkbase and installed it with "install app from file". Thank you for taking the time!
Hi, I currently have this data and I would like to extract the date and time and display the line if it is within the last 24 hours.

Example: current time June 19, the result should be:

drwxrwxrwx 2 root root 4.0K Jun 19 06:05 crashinfo

---------------------- DATA START below -----------------------
/opt/var.dp2/cores/:
total 4.0K
drwxrwxrwx 2 root root 4.0K Jun 19 06:05 crashinfo

/opt/var.dp2/cores/crashinfo:
total 0

/var/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:05 crashjobs

/var/cores/crashinfo:
total 0

/var/cores/crashjobs:
total 0

/opt/panlogs/cores/:
total 0

/opt/var.cp/cores/:
total 4.0K
drwxr-xr-x 2 root root 4.0K May 28 06:06 crashjobs

/opt/var.cp/cores/crashjobs:
total 0

/opt/var.dp1/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:07 crashjobs

/opt/var.dp1/cores/crashinfo:
total 0

/opt/var.dp1/cores/crashjobs:
total 0

/opt/var.dp0/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:07 crashjobs

/opt/var.dp0/cores/crashinfo:
total 0

/opt/var.dp0/cores/crashjobs:
total 0
---------------------- DATA END above -----------------------
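For what it's worth, one possible SPL sketch (the regex and field names are assumptions, and ls timestamps carry no year, so this guesses the current year and can misfire around a year boundary):

```
| rex max_match=0 "(?<line>[d-][rwx-]{9}.*)"
| mvexpand line
| rex field=line "(?<mon>\w{3})\s+(?<day>\d{1,2})\s+(?<hm>\d{2}:\d{2})"
| eval mtime=strptime(mon." ".day." ".hm." ".strftime(now(),"%Y"), "%b %d %H:%M %Y")
| where mtime >= relative_time(now(), "-24h")
| table line
```

The first rex pulls every directory-listing line out of the raw event as a multivalue field, mvexpand gives one result row per line, and the strptime/where pair keeps only lines whose modification time falls within the last 24 hours.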
Hi all, We are indexing different topics from our Kafka cluster into one index, say index1. We now have a requirement to retain a subset of those topics for a longer period of time. Is there a way to implement this while still getting all the data into the same index? I can imagine searching the subset of topics that needs longer retention, filtering it, and collecting it into a new index. But I believe this incurs licensing if we want to retain the source/sourcetype and other fields, which is not practical for us. We want to retain the original source/sourcetype etc. and have the subset of topics that needs longer retention copied over to another index, say index2, that has longer retention. We also need the original copy in index1, as we have a lot of dependent searches and alerts that use this index to search for the same data.
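One pattern sometimes used for this (a sketch only; the sourcetype names are placeholders) is a scheduled search that copies the subset into the longer-retention index with collect. Note that by default collect writes the copies with sourcetype=stash, which is typically not counted against license; overriding the sourcetype back to the original is what generally incurs the license usage you mention, so this sketch keeps the originals searchable in index1 and accepts stash in index2:

```
index=index1 (sourcetype="kafka:topic_a" OR sourcetype="kafka:topic_b")
| collect index=index2
```

You would schedule this over a fixed window (e.g. every hour over the previous hour) so each event is copied exactly once.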
The server has no direct access to the internet, and we only want to open the individual URLs that are required to run the updates. So the question is: which URLs does the content update need to reach in order to download its content?
Hi @IzI , you have to download and install it. Why these questions, what's the issue? Ciao. Giuseppe
ok, but does it require URL approvals? 
Hi @IzI , ES Content Updates App is an app from Splunkbase that you install on your Search Heads, so it doesn't need any additional firewall route. Ciao. Giuseppe
Hi @karthi2809 , Linux servers can easily send syslog, which you can receive directly in Splunk or pass through an rsyslog or syslog-ng server. Anyway, I still suggest trying to convince your customer to use Universal Forwarders: they are more efficient and secure, and you can capture more kinds of logs. Ciao. Giuseppe
Hi @AL3Z , I know these solutions, but I always recommend Splunk training. In addition, search for videos on the Splunk YouTube channel. Ciao. Giuseppe
Hi @gcusello , It looks like some courses are free, while others require payment. Could you suggest whether there are better alternatives on platforms like Coursera or Udemy? Thanks
Please provide more information such as the source of your dashboard
Hi @ITWhisperer  That didn't work, unfortunately; it gave the following error: Set token value to render visualization $form.element$
Assuming you have a way to uniquely identify your events, you could try something like this:

1. Read the current data
2. Set a field to 1
3. Append the previous data (setting the field to 2)
4. Sum the field by unique id

Where the sum is 3, the id exists in both data sets; where it is 2, it exists only in the previous data set; where it is 1, it exists only in the current data set.
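The steps above can be sketched in SPL like this (the index names and the unique_id field are placeholders for whatever identifies your events):

```
index=current_data
| eval marker=1
| append
    [ search index=previous_data
    | eval marker=2 ]
| stats sum(marker) AS marker_sum by unique_id
| eval status=case(marker_sum==3, "both",
                   marker_sum==2, "previous only",
                   marker_sum==1, "current only")
```

The case() at the end just turns the 1/2/3 sums into readable labels; be aware that append is subject to subsearch limits on very large previous data sets.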
Hi, which URLs have to be opened in the firewall for the ES Content Update app? What else may need to be opened in the firewall for the app to work properly?   Regards, Alex
Hi @gcusello  I agree with that point, but our client is not interested in installing an agent, and as you mentioned syslog: the application team has multiple logs. So is there any way to monitor those logs? And how do we onboard syslog; are there any examples?   Thanks, Karthi
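As a rough illustration only (the ports, index, and sourcetype here are placeholders, not recommendations for your environment), a direct syslog input on a Splunk instance can look like this in inputs.conf; the more common best practice is to land syslog on an rsyslog/syslog-ng server and have a forwarder monitor the resulting files:

```
# inputs.conf -- hypothetical example of direct syslog inputs
[udp://514]
index = linux_syslog
sourcetype = syslog

[tcp://1514]
index = linux_syslog
sourcetype = syslog
```

The UDP stanza receives classic syslog on 514; the TCP stanza is for senders that can use a reliable transport.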
Try something like this | eval selected_total = mvcount($form.element$)
With Splunk stopped, please give me the output of netstat -aon | grep 8089. If this shows an established connection on 8089, you will need to disconnect whatever it is and start Splunk with the splunk user again; that should fix the issue.
Can you paste the output of this command from the indexer: $SPLUNK_HOME/bin/splunk show kvstore-status
Hi All, I need some help with an SPL query to compare the data from the same host on 2 different dates and give a status of "Found" or "Not Found". Status = Found if it finds that notepad is still installed on the same path on the same machine, else Not Found.

So far I have created a KV store lookup to store the data, but I cannot come up with the logic to compare it. I have added sample data below. All help is appreciated.

HostName: xxxxx
ExeVersion: null
Path: C:\Windows\WinSxS\amd64_microsoft-windows-notepad_31bf3856ad364e35_10.0.19041.3996_none_e397b63725671b86\f\notepad.exe
ProductName: null
RunDate: 2024-06-13 07:41:37
sourcetype: feed

HostName: xxxxx
ExeVersion: null
Path: C:\Windows\WinSxS\amd64_microsoft-windows-notepad_31bf3856ad364e35_10.0.19041.3996_none_e397b63725671b86\r\notepad.exe
ProductName: null
RunDate: 2024-06-14 07:41:37
sourcetype: feed
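Not a definitive answer, but one way to sketch this comparison directly from the events (the index name is a placeholder; the dates and field names are taken from the sample and may need adjusting) is to count distinct run dates per host/path pair:

```
index=your_index sourcetype=feed Path="*notepad.exe"
    (RunDate="2024-06-13*" OR RunDate="2024-06-14*")
| stats dc(RunDate) AS days by HostName, Path
| eval Status=if(days==2, "Found", "Not Found")
```

A pair that appears on both dates gets days=2 and therefore "Found"; a pair present on only one of the two dates gets "Not Found". The same idea would work against your KV store lookup by appending today's events to an inputlookup and counting per key.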