All Posts


Did you want cids to contain that GUID? Try

| rex field=log ".*customers\s(?<cids>.*)"

Alternatively, if the GUID is always at the end, following a space, you can even drop the "customers" part:

| rex field=log "(?<cids>\S+$)"

Your example creates a capture group named "cids" that captures nothing (the first, empty set of parentheses), and then a second, unnamed group that matches what you want but is never extracted into a field (the second set of parentheses). This document might help explain in more detail: https://docs.splunk.com/Documentation/SCS/current/Search/AboutSplunkregularexpressions#Capture_groups_in_regular_expressions
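A quick way to test either rex is against a throwaway event; the sample text below is abbreviated from the question and is only a placeholder:

| makeresults
| eval log="Reminder Message processed, no linked customers aaf60d69-99a9-41f5-a081-032224284066"
| rex field=log ".*customers\s(?<cids>.*)"
| table cids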
I want to extract a string (a GUID) from the log right after "customers". This regex expression works in https://regex101.com/ but not in Splunk. My field name is log:

2023-06-19 15:28:01.726 ERROR [communication-service,6e72370er2368b08,6e723709fd368b08] [,,,] 1 --- [container-0-C-1] c.w.r.acc.commservice.sink.ReminderSink : Reminder Message processed, no linked customers aaf60d69-99a9-41f5-a081-032224284066

| rex field=log "(?<cids>).*customers\s(.*)"
Before you do your eval statement, test that your extraction works. In your query, use a rex statement to test this.

... | rex field=<your_field> "\"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)" ...

Then, once you confirm you are extracting your User field values, add the eval statement to the query. Once you confirm that works, you can go back to your sourcetype and modify your extract and eval lines.

--- If this reply helps you, Karma would be appreciated.
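For a quick standalone check, something like this should do; the JSON fragment and the username are made-up placeholders, and raw simply stands in for <your_field>:

| makeresults
| eval raw="{\"path\":\"auth/abc/login/jsmith_01\"}"
| rex field=raw "\"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)"
| table User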
Hello, I am trying to change the email address of my Splunk community account. I went to My settings > Personal > Email and set the new email address. I got the verification email and verified the new email address. The new email address was then displayed under My settings. However, when I logged out and logged back in, the old email address was shown again. Is this a known issue?
I believe that your scenario could be accomplished with Ingest Actions: https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/DataIngest This should support cloning data and applying different filtering rules and routing to the two streams.
Thank you! Just like that it works, and in only one line.
| eval fruit=mvappend(fruit1,if(fruit2!="NULL",fruit2,null())) | stats count by fruit
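For anyone landing here later, a self-contained sketch of the same idea on made-up sample data (the two makeresults events and the fruit values are placeholders, not the real events):

| makeresults
| eval fruit1="Apple", fruit2="NULL"
| append [| makeresults | eval fruit1="Pear", fruit2="Apple"]
| eval fruit=mvappend(fruit1, if(fruit2!="NULL", fruit2, null()))
| stats count by fruit

Because stats counts an event once per value of a multivalue by-field, both columns end up in one combined fruit list.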
A little update: I now got to a point where I have the following situation:

Fruit_1   count   Fruit_2   count
Apple     5       null      null
Orange    10      null      null
Pear      5       Apple     5
Melon     10      Orange    10

How do I get it so that the amounts of apples and oranges from columns Fruit_1 and Fruit_2 are combined into 1 big fruit list and 1 count list? So the result should be:

Fruit    Count
Apple    10
Orange   20
Pear     5
Melon    5
Hi @richgalloway, the TCP connection to the indexer and its port is also set up. No firewall is blocking either, but still no events are being returned on search.
Hi All,

We have Splunk Enterprise Security 6.6.2 - EOL, I know! Our admins are working on upgrading.

My problem: We created 2 new user groups, Team A and Team B.
We gave Team A total access to data in half the indexes (role restrictions on indexes).
We gave Team B total access to data in the other half of the indexes (role restrictions on indexes).
The outcome was as expected: Team A can only see data from the indexes for their role, and likewise for Team B.

This is where we have a problem. Both teams need to use the Incident Review dashboard, and both teams need to assign notable events, as owners, to users within their own team. However, they cannot, and the system gives errors. If we take the role restriction off, so both teams can see all data, then they can assign notable events. Our internal Splunk admin says it is a bug in this version and the system needs to be upgraded.

My questions: Has anyone experienced something similar? Is there a bug and, if so, is there any reference for it? Are there any workarounds for this problem? We have 2 teams that need to use Incident Review to respond to alerts, but these teams need to be independent and should not be able to see data in indexes that belong to the other team.

Thanks for any advice.
I have a few questions about how Splunk sees and displays the license warning counts. Yes, if you go over your pool size, that equals a warning count. However, I see some conflicting information in several instances. For example, when I add a new license that is bigger than the previous one, I would think the warning count would reset, but it doesn't. I also have a search that looks at license_usage.log and shows me how many times I have gone over my size in the last 30 days; this also shows different counts than what is shown in the warning count section. The final weird issue is that I had a server warning count of 44, but a week later, without any changes, the number had decreased to 37. What causes so many different numbers with Splunk licenses?
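For reference, the kind of search I mean looks roughly like this; it is sketched from memory against the RolloverSummary events in license_usage.log, where b is the bytes indexed that day and poolsz is the pool size, so adjust for your environment:

index=_internal source=*license_usage.log type=RolloverSummary earliest=-30d
| eval over=if(b > poolsz, 1, 0)
| stats sum(over) as days_over_pool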
Since moving to 9.2.1, my df.sh events are now a single event when searching. I also noticed the format is bad when running the script compared to the built-in df. Novice Linux guy here looking to see if anyone else has come across this. Thanks!

(Screenshots attached: splunk df, linux df, splunk event)
Hello all, I need to configure SAML/SSO with Splunk but I'm having the following issues:

- I have 3 search heads in a cluster (without a load balancer) => I can create a dedicated SAML config for each search head and disable the replication of authentication.conf.
- We have many tenants and users connecting from the different tenants to Splunk (currently we have multiple LDAP configurations) => I understand that Splunk only accepts one identity provider (IdP) with SAML, so users from other tenants will not be able to access Splunk with SSO.
- Ideally, we must have some users connecting with LDAP, but Splunk doesn't allow enabling both LDAP and SAML simultaneously, or it is possible but requires a custom script.

Questions:
1- Has anyone worked on a script to enable both LDAP and SAML?
2- Any ideas about the best config on the Azure AD side regarding the multiple tenants and B2B collaboration?
3- Any advice in general on how to better approach this issue?

Best
Thanks @harsmarvania57, my bad. It worked as well. I want to write another script that uses the Splunk SDK, so it does not depend on the Splunk lib or have to run on the Splunk server. Anyway, I have nearly finished the script using the SDK. Thanks for your help; your script helped me a lot!
Thank you very much. It works.
Try this

``` Parse the date ```
| rex "\s(?<date>\w{3}\s\d{1,2})\s"
``` Convert the date into epoch form ```
| eval epoch=strptime(date, "%b %d")
``` See if the date falls in the last 24 hours ```
| where epoch > relative_time(now(), "-24h")
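To sanity-check it against a throwaway event (the sample text and date are placeholders; note the pattern has no year, so verify how strptime fills that in on your instance):

| makeresults
| eval _raw="something Jun 14 something-else"
| rex "\s(?<date>\w{3}\s\d{1,2})\s"
| eval epoch=strptime(date, "%b %d")
| eval in_last_24h=if(epoch > relative_time(now(), "-24h"), "yes", "no")
| table date epoch in_last_24h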
Hi @glc_slash_it, here it is. However, I am not getting just the specific lines; instead, the whole log is getting indexed.

transforms.conf
[err_line]
REGEX = ^(?!.error)
DEST_KEY = _MetaData:Index
FORMAT = error_idx

props.conf
[err_src]
TRANSFORMS-err_line = err_line
Try something like this

| rex max_match=0 "(?m)^(\S+ ){5}(?<datetimefile>\w+ +\d+\s+\d+:\d+\s+\S+)$"
| mvexpand datetimefile
| eval timestamp=strptime(datetimefile,"%b %d %H:%M")
| where now()-timestamp < 24*60*60
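In case it helps to see it end to end, here is a throwaway test with a fake two-line ls -l style listing; the file names, sizes, and dates are made up, and urldecode("%0A") is only a trick to embed a newline in the sample:

| makeresults
| eval _raw="-rw-r--r-- 1 root root 1024 Jun 14 07:56 report.txt" . urldecode("%0A") . "-rw-r--r-- 1 root root 2048 Jan 01 09:00 old.txt"
| rex max_match=0 "(?m)^(\S+ ){5}(?<datetimefile>\w+ +\d+\s+\d+:\d+\s+\S+)$"
| mvexpand datetimefile
| eval timestamp=strptime(datetimefile,"%b %d %H:%M")
| where now()-timestamp < 24*60*60

Only rows whose date falls inside the last 24 hours survive the final where, so tweak the sample dates if you want to see them pass.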
Unfortunately it still breaks into two events, and I wanted to receive only 1 event:

Event 1 - Time: 6/14/24 7:56:39.168 AM
    "TimeStamp": "\/Date(1718366199168)\/",
    "ID": 7082,
    "Parameters": null,
    {
    },
    ... (6 lines total)

Event 2 - Time: 6/14/24 7:56:39.013 AM
    "SplunkTime": "1718366199.01303",
    "Source3": null,
    "Source2": null,
    "Source1": null,
    "ProcessPIUser": null,
    ... (15 lines total)
For context: this question is regarding use cases/user stories for Splunk. A use case can be linked to multiple user stories, and I want to count the total number of user stories.
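If it helps to make this concrete, the shape of what I am after is roughly the following; use_case and user_story are purely placeholder field names for however the links are actually stored:

... | stats dc(user_story) as stories_per_use_case by use_case
| addcoltotals stories_per_use_case labelfield=use_case label=Total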