All Posts

Thank you! Just like that it works, and in only one line:
| eval fruit=mvappend(fruit1,if(fruit2!="NULL",fruit2,null())) | stats count by fruit
A little update: I have now got to the point where I have the following situation:

Fruit_1  count  Fruit_2  count
Apple    5      null     null
Orange   10     null     null
Pear     5      Apple    5
Melon    10     Orange   10

How do I get the counts of apples and oranges from columns Fruit_1 and Fruit_2 combined into one big fruit list and one count list? So the result should be:

Fruit   Count
Apple   10
Orange  20
Pear    5
Melon   5
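For what it's worth, a sketch of one way to do that: pack each fruit/count pair into a single string, expand, and re-aggregate. The field names count_1 and count_2 are assumptions (two result columns cannot both literally be named count):

``` build fruit:count pairs, skipping the null Fruit_2 rows ```
| eval pair_1=Fruit_1.":".count_1
| eval pair_2=if(Fruit_2!="null", Fruit_2.":".count_2, null())
| eval pairs=mvappend(pair_1, pair_2)
``` one row per pair, then split and sum ```
| mvexpand pairs
| rex field=pairs "^(?<Fruit>[^:]+):(?<fruit_count>\d+)$"
| stats sum(fruit_count) as Count by Fruit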
Hi @richgalloway, the TCP connection to the indexer and its port is established. There is no firewall blocking either, but still no events are returned on search.
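If it helps, the indexer's own metrics record every inbound forwarder connection, so you can confirm whether data is arriving at all. A sketch to run on the indexer (the group and field names are standard in metrics.log):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by sourceIp, sourceHost
| convert ctime(last_seen)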
Hi All, we have Splunk Security ENT 6.6.2 - EOL, I know! Our admin guys are working on upgrading.

My problem: we created 2 new user groups, Team A and Team B.
- We gave Team A total access to data in half the indexes (role restrictions on indexes).
- We gave Team B total access to data in the other half of the indexes (role restrictions on indexes).

The outcome was as expected: Team A can only see data from the indexes for their role, and likewise for Team B. This is where we have a problem. Both teams need to use the Incident Review dashboard, and both teams need to assign notable events, as owners, to users within their own team. However, they cannot, and the system gives errors. If we take the role restriction off, so both teams can see all data, then they can assign notable events. Our internal Splunk admin says it is a bug in this version and the system needs to be upgraded.

My questions:
- Has anyone experienced something similar?
- Is there a bug and, if so, is there any reference for it?
- Are there any workarounds for this problem?

We have 2 teams that need to use Incident Review to respond to alerts. However, these teams need to be independent and should not be able to see data in indexes that belong to the other team. Thanks for any advice.
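Not an answer to the bug question, but for reference, the index split described here is normally expressed per role in authorize.conf. A minimal sketch with placeholder role and index names; note that index lists from inherited roles such as ess_analyst are additive, which is worth checking whenever access leaks or breaks:

[role_team_a]
importRoles = ess_analyst
srchIndexesAllowed = team_a_idx1;team_a_idx2

[role_team_b]
importRoles = ess_analyst
srchIndexesAllowed = team_b_idx1;team_b_idx2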
I have a few questions on how Splunk counts and displays license warnings. Yes, if you go over your pool size, that equals a warning. However, in several instances I see conflicting information. For example, when I add a new license that is bigger than the previous one, I would expect the warning count to reset, but it doesn't. I also have a search that looks at license_usage.log and shows me how many times I have gone over my size in the last 30 days; this also gives different counts than what is shown in the warning count section. The final weird issue: I had a warning count at 44, but a week later, without any changes, the number decreased to 37. What causes so many different numbers with Splunk licenses?
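One way to compare like with like: the number the license master actually enforces against comes from the daily RolloverSummary events in license_usage.log, rather than the per-hour usage events. A sketch, with a hypothetical 100 GB quota:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d
``` b is the day's indexed volume in bytes ```
| eval daily_gb=round(b/1024/1024/1024, 2)
| timechart span=1d sum(daily_gb) as daily_gb
| where daily_gb > 100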
Since moving to 9.2.1, my df.sh events are now a single event when searching. I also notice the format is bad when running the script compared to the built-in df. Novice Linux guy here, looking to see if anyone else has come across this. Thanks!

(screenshots: splunk df, linux df, splunk event)
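In case it is the usual suspect: whether df output lands as one event or one event per line is controlled by line breaking for that sourcetype on the parsing tier. A props.conf sketch only, assuming the sourcetype is df and that you want per-line events:

[df]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)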
Hello all, I need to configure SAML/SSO with Splunk, but I'm running into the following issues:
- I have 3 search heads in a cluster (without a load balancer) => I can create a dedicated SAML config for each search head and disable the replication of authentication.conf.
- We have many tenants, and users connect to Splunk from the different tenants (currently we have multiple LDAP configurations) => I understand that Splunk only accepts one identity provider with SAML, so users from the other tenants will not be able to access Splunk with SSO.
- Ideally, we must have some users connecting with LDAP, but Splunk doesn't allow enabling both LDAP and SAML simultaneously; or it is possible, but it requires a custom script.

Questions:
1. Has anyone worked on a script to enable LDAP and SAML together?
2. Any idea about the best config from Azure AD regarding multi-tenancy and B2B collaboration?
3. Any advice in general on how to better approach this?

Best
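On the per-search-head config point: a cluster member can be told not to replicate authentication.conf, so each member keeps its own SAML stanza. A sketch for server.conf on each member (setting name per the shclustering conf-replication options):

[shclustering]
conf_replication_include.authentication = false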
Thanks @harsmarvania57, my bad. It worked as well. I want to write another script that uses the Splunk SDK, so it does not depend on the Splunk lib or have to run on the Splunk server. Anyway, I have nearly finished the script using the SDK. Thanks for your help; your script helped me a lot!
Thank you very much. It works.
Try this

``` Parse the date ```
| rex "\s(?<date>\w{3}\s\d{1,2})\s"
``` Convert the date into epoch form ```
| eval epoch=strptime(date, "%b %d")
``` See if the date falls in the last 24 hours ```
| where epoch > relative_time(now(), "-24h")
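One caveat with the above: strptime with "%b %d" carries no year, so dates parsed near the start of January can land a year off. A hedged refinement that pins the current year and rolls back when the result would be in the future:

| eval epoch=strptime(date." ".strftime(now(), "%Y"), "%b %d %Y")
| eval epoch=if(epoch > now(), relative_time(epoch, "-1y"), epoch)
| where epoch > relative_time(now(), "-24h")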
Hi @glc_slash_it, here it is. Although I am not getting the specific lines; instead, the whole log is getting indexed.

transforms.conf

[err_line]
REGEX = ^(?!.error)
DEST_KEY = _MetaData:Index
FORMAT = error_idx

props.conf

[err_src]
TRANSFORMS-err_line = err_line
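For comparison, if the goal is to index only the error lines and drop everything else, the documented pattern is a null-queue pair rather than index routing. A sketch, keeping your stanza name; this acts per event, so line breaking must already split the log into one event per line, and the last matching transform wins:

props.conf

[err_src]
TRANSFORMS-set = setnull, keep_errors

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_errors]
REGEX = error
DEST_KEY = queue
FORMAT = indexQueue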
Try something like this

| rex max_match=0 "(?m)^(\S+ ){5}(?<datetimefile>\w+ +\d+\s+\d+:\d+\s+\S+)$"
| mvexpand datetimefile
| eval timestamp=strptime(datetimefile,"%b %d %H:%M")
| where now()-timestamp < 24*60*60
Unfortunately it still breaks into two events, and I wanted to receive only 1 event:

Event 1, 6/14/24 7:56:39.168 AM:
"TimeStamp": "\/Date(1718366199168)\/",
"ID": 7082,
"Parameters": null,
{
},
... (6 lines in all)

Event 2, 6/14/24 7:56:39.013 AM:
"SplunkTime": "1718366199.01303",
"Source3": null,
"Source2": null,
"Source1": null,
"ProcessPIUser": null,
... (15 lines in all)
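If useful: merging those fragments back into one event usually means disabling line merging and making LINE_BREAKER fire only where a new record genuinely starts. A props.conf sketch only; the anchor below borrows the "TimeStamp" key from the sample and will need adjusting to the real raw feed:

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\s*"TimeStamp")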
For context: this question is regarding use cases/user stories for Splunk. A use case can be linked to multiple user stories, and I want to count the total number of user stories.
Let's say we have the following data set:

Fruit_ID  Fruit_1  Fruit_2
1         Apple    NULL
2         Apple    NULL
3         Apple    NULL
4         Orange   NULL
5         Orange   NULL
6         Orange   NULL
7         Apple    Orange
8         Apple    Orange
9         Apple    Orange
10        Apple    Orange

Now I am trying to count the total amount of every fruit; in the above example it should be 7 apples and 7 oranges. The problem is that these fruits are separated into 2 different columns, because a row can contain both an apple AND an orange. How do I deal with this when counting the total amount of fruit? Counting one at a time works:

| stats count by Fruit_1

But how do I count both to give a total number, since they are 2 separate columns? I tried combining both columns so it is all one long list of values in 1 column, but I could not get a definitive answer on how to do this. I tried appending results, so first count Fruit_1, then append count Fruit_2, but I did not get the right result of Apple: 7, Orange: 7; it is either one or the other. Does anybody have a fix for how to count over multiple fields like this and combine the result together in 1 field?
Sorry for missing the details ... The message came from the app, not Splunk itself. Splunk itself is a standalone instance, version 8.1.5, running on a RHEL 8.10 Linux VM. I downloaded the package from Splunkbase and installed it with "install app from file". Thank you for taking the time!
Hi, I currently have this data, and I would like to extract the date and time and display the LINE if it is within the last 24 hours.

Example: current time June 19; the result should be:

drwxrwxrwx 2 root root 4.0K Jun 19 06:05 crashinfo

---------------------- DATA START below -----------------------

/opt/var.dp2/cores/:
total 4.0K
drwxrwxrwx 2 root root 4.0K Jun 19 06:05 crashinfo

/opt/var.dp2/cores/crashinfo:
total 0

/var/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:05 crashjobs

/var/cores/crashinfo:
total 0

/var/cores/crashjobs:
total 0

/opt/panlogs/cores/:
total 0

/opt/var.cp/cores/:
total 4.0K
drwxr-xr-x 2 root root 4.0K May 28 06:06 crashjobs

/opt/var.cp/cores/crashjobs:
total 0

/opt/var.dp1/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:07 crashjobs

/opt/var.dp1/cores/crashinfo:
total 0

/opt/var.dp1/cores/crashjobs:
total 0

/opt/var.dp0/cores/:
total 8.0K
drwxrwxrwx 2 root root 4.0K May 28 06:05 crashinfo
drwxr-xr-x 2 root root 4.0K May 28 06:07 crashjobs

/opt/var.dp0/cores/crashinfo:
total 0

/opt/var.dp0/cores/crashjobs:
total 0

---------------------- DATA END above -----------------------
Hi all, we are indexing different topics from our Kafka cluster to a single index, say index1. We now have a requirement to retain a subset of those topics for a longer period of time. Is there a way to implement this while we still get all the data into the same index? I can think of searching the subset of topics that need longer retention, filtering them, and collecting them to a new index. But this involves licensing if we want to retain the source/sourcetype and other fields, I believe, which is not practical for us. We want to retain the original source/sourcetype etc. and have the subset of topics that need longer retention copied over to another index, say index2, that has longer retention. We also need the original copy in index1, as we have a lot of dependent searches and alerts that use this index for the same data.
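One index-time alternative to collect is event cloning: a transform with CLONE_SOURCETYPE duplicates the matching events and routes only the clones, so index1 keeps the originals untouched. A sketch with placeholder names; two caveats are that the clone must take a new sourcetype, and the cloned copy is indexed (and licensed) a second time:

props.conf

[kafka:events]
TRANSFORMS-clone = clone_long_retention

transforms.conf

[clone_long_retention]
# match only the topics that need longer retention
REGEX = topic_a|topic_b
CLONE_SOURCETYPE = kafka:events:retained
DEST_KEY = _MetaData:Index
FORMAT = index2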
The server has no direct access to the internet, and we only want to open the individual URLs that are required to run the updates. So the question is which URL the content update needs in order to load its content.