

All Posts

@jagannathbhatbb - This setting should be present there; I'm not sure whether it is missing on trial Splunk Cloud instances.
A few things that will be useful: you can use the Monitoring Console's dashboard and alert.

Dashboard -> Splunk Settings > Monitoring Console > Forwarders: Deployment. If the setup has not been done yet, do the setup first (the page will give you a link to set it up).

Alert -> Splunk Settings > Searches, Reports & Alerts. Select App as Monitoring Console, select Owner as All, and search for "Missing Forwarder". Enable the "DMC Alert - Missing forwarders" alert and add your email address to receive the alerts by email.

There is one more search you can run to see what data a forwarder is sending:

| tstats count where index=* host="<forwarder-host-name>" by index, sourcetype

I hope this helps!!! Kindly upvote!!!
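If you also want to confirm that a forwarder is still connecting to the indexers at all, a minimal sketch like the following against the _internal index may help. It assumes default splunkd metrics logging, where the tcpin_connections group in metrics.log records inbound forwarder connections and carries a hostname field for the sending forwarder:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_connected by hostname
| eval last_connected=strftime(last_connected, "%Y-%m-%d %H:%M:%S")

A forwarder that stops appearing here has a connectivity problem; one that appears here but sends no events usually has an inputs problem instead.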
Map is generally NOT a solution to searches. This is a potential use of a subsearch, i.e.

index="<indexname>" source = "user1" OR source = "user2"
    [ search index="<indexname>" source = "user1" OR source = "user2" "<ProcessName>" "Exception occurred"
      | rex field=message "(?<dynamic_text>jobId:\s*\w+)"
      | search dynamic_text!=null
      | stats values(dynamic_text) AS dynamic_text ]

So here you are using a subsearch to get all the dynamic_text values you want, and that is then passed as a constraint to the outer search.
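One caveat, offered as an assumption about your data: a subsearch returns field=value pairs, so the outer search will filter on a field literally named dynamic_text, which must also be extracted in the outer events. If it is not, a common idiom is to rename the subsearch field to search so its values are matched against the raw event text instead. A sketch:

index="<indexname>" source = "user1" OR source = "user2"
    [ search index="<indexname>" source = "user1" OR source = "user2" "<ProcessName>" "Exception occurred"
      | rex field=message "(?<dynamic_text>jobId:\s*\w+)"
      | stats values(dynamic_text) AS dynamic_text
      | mvexpand dynamic_text
      | rename dynamic_text AS search ]

The mvexpand turns the multivalue result into one row per jobId, and the rename makes the subsearch emit each value as a raw search term ORed into the outer search.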
Can you describe your intended output? It's challenging to reverse engineer SPL to understand what you are trying to do; if you can say, from your data, what you would like to see in the output, it would be helpful. Did you try the SPL I posted, and if so, did it give you a starting point for producing your results?
Hello, I'm working on creating a Splunk troubleshooting dashboard for our internal team, who are new to Splunk, to troubleshoot forwarder issues, specifically cases where no data is being received. I'd like to know the possible ways to troubleshoot forwarders when data is missing, or for other related issues. Are there any existing dashboards I could use as a reference? Also, what are the key metrics, internal index searches, and REST calls that I should focus on to cover all aspects of forwarder troubleshooting? #forwarder #troubleshoot #dashboard
I found the problem: I needed to add the following to the inputs.conf file of the UF. I don't know if this is a problem introduced by the update or if it was also needed before, but once I added it, the events showed the correct host:

[default]
host = 192.168.90.233
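For context, a minimal sketch of how that stanza sits alongside an input in a UF inputs.conf. The monitor path, index, and sourcetype below are illustrative assumptions, not from the original post; the [default] stanza applies its host value to every input that does not override it:

[default]
host = 192.168.90.233

[monitor:///var/log/messages]
index = main
sourcetype = syslog

A restart of the forwarder is needed for inputs.conf changes to take effect.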
This is, unfortunately, not fixed.
Hi All, I am searching UiPath Orchestrator logs in Splunk as follows:

index="<indexname>" source = "user1" OR source = "user2" "<ProcessName>" "Exception occurred"
| rex field=message "(?<dynamic_text>jobId:\s*\w+)"
| search dynamic_text!=null
| stats values(dynamic_text) AS extracted_texts
| map search="index="<indexname>" source = "user1" OR source = "user2" dynamic_text=\"$extracted_texts$\""

With my above search, I'll have to reference the jobId matched in the first search to get the other matching records and process the transaction details. Thanks a lot in advance!
Another approach is to (almost) use stats :-) as you originally proposed, with a little help from foreach.

| stats values(value_a) as value_a
| eval product = 1
| foreach value_a mode=multivalue
    [eval product = product * <<ITEM>>]

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="value_a
0.44
0.25
0.67"
``` data emulation above ```

The above search gives this result:

value_a   product
0.25      0.074
0.44
0.67
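If you prefer a pure stats answer with no foreach, a mathematical workaround is to sum logarithms and exponentiate, since product(x) = exp(sum(ln(x))). A minimal sketch using the same emulated data, assuming all values are positive (ln() is undefined for zero and negative values):

| makeresults format=csv data="value_a
0.44
0.25
0.67"
``` data emulation above ```
| stats sum(eval(ln(value_a))) as ln_sum
| eval product = exp(ln_sum)

This returns product ≈ 0.0737, matching the foreach result.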
Thank you for your reply. I deeply apologize for the issue I described! Our sample data is as follows:

2024-12-12 00:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-12 08:30:14 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-12 11:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-13 15:30:55 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-13 00:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-14 19:30:30 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-14 22:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "

The field titles are: opr_time, oprt_user_acct, oprt_user_name, blng_dept_name, is_cont_sens_acct
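For reference, a hedged sketch of extracting those field titles with rex, assuming the quote-and-comma delimiters are as consistent as they appear in the sample (the regex is illustrative and may need adjusting to your actual raw events):

| rex "^(?<opr_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s*\",\s*(?<oprt_user_acct>\d+),\"\s*(?<oprt_user_name>[^\"]+?)\s*\",\"\s*(?<blng_dept_name>[^\"]+?)\s*\",\"\s*(?<is_cont_sens_acct>[^\"]+?)\s*\"?$"

Each named group maps to one of the field titles listed above; the \s* terms absorb the stray spaces inside the quoted values.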
Correct, you can't multiply down columns, but you can across rows, so depending on how many events and columns you want the product of, you could do something like this:

| makeresults format=csv data="value_a,value_b,otherValue
0.44,1,10
0.25,2,20
0.67,3,30"
| appendpipe
    [| transpose 0
     | eval product = 1
     | foreach row* [eval product = product * '<<FIELD>>']
     | eval {column} = product
     | stats values(value_a) as value_a values(value_b) as value_b]
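With the emulated data above, the appendpipe adds one row carrying the per-column products: value_a = 0.0737 (0.44 * 0.25 * 0.67) and value_b = 6 (1 * 2 * 3), while the original three rows pass through unchanged.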
Some additional information that was somehow omitted from my original post: if I change the date range part of the query from "(7 * 8)" to just "7", the query runs fine. If I change "(7 * 8)" to "(7 * 1)", the query runs fine. If I change "(7 * 8)" to "(7 * 2)", or any number greater than 2, the query fails with the same error as mentioned in the original post.
We are using the Salesforce Add-On 4.8.0 and we have the username lookup enabled, and it seems to be working properly except for the user type: it is setting all users to Standard. Has anyone seen this, or does anyone know how to fix it?
After some more searching, I found the .conf23 session SEC1936B and followed its file configuration instructions. I have TLS connections now. Thank you for your time.
N.A  
I'm trying to find a simple way to calculate the product of a single column, e.g.

value_a
0.44
0.25
0.67

Ideally, I could use something like this:

| stats product(value_a)

But this doesn't seem possible.
That was just for checking how many fields are returned from the sample of data. Of course, it's not suitable for a production search.
Hi @CyberWolf, There's a tendency among practitioners to bin time into buckets rounded to the nearest time interval, e.g. 1 hour: 00:00, 01:00, 02:00, etc.; however, this results in counting errors. Instead, count using a rolling window in ascending _time order:

index=VPN_Something status=failure
| stats count by _time user
| streamstats time_window=1h sum(count) as failure_count by user
| where failure_count>5

Since you're only interested in a 1-hour window, your search time range only needs to span the last hour plus any allowance for ingest lag and a buffer to accommodate your scheduling interval. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Streamstats for information on scaling streamstats.

If you're using accelerated data models or indexed fields, or if your raw events are structured in key-value pairs separated by minor breakers, you can use tstats to greatly improve the performance of the search. If you have the capacity, you might also consider a real-time search that counts events as they're indexed, although the results may be incorrect relative to your requirements.

If you have Splunk Enterprise Security, look at the "Access - Excessive Failed Logins - Rule" correlation search. For reference, it's a real-time search scheduled every 5 minutes (*/5 * * * *), with earliest=rt-65m@m and latest=rt-5m@m:

| from datamodel:"Authentication"."Failed_Authentication"
| stats values("tag") as "tag", dc("user") as "user_count", dc("dest") as "dest_count", count by "app","src"
| where 'count'>=6

In the Authentication data model, app would be something like vpn, and src would be a device identifier. As @isoutamo wrote, there are many approximate solutions to this problem. The correct solution depends on your requirements and your tolerance for counting errors.
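As a sketch of the tstats variant mentioned above, assuming the Authentication data model is accelerated and your VPN failures are mapped into its Failed_Authentication dataset (field names follow the data model's dataset prefix convention; summariesonly=true restricts the search to accelerated summaries):

| tstats summariesonly=true count from datamodel=Authentication
    where nodename=Authentication.Failed_Authentication by _time span=1m, Authentication.user
| rename Authentication.user as user
| streamstats time_window=1h sum(count) as failure_count by user
| where failure_count>5

The 1-minute span trades a small amount of rolling-window precision for a much cheaper scan than raw-event streamstats.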
Hi, you could find quite a few examples of this with the query

site:community.splunk.com login failed more than 5 times per hour solved

Just copy and paste this into Google. In your example there is at least one misunderstanding: you have added "bin _time span=24h", but later you expect your _time to be divided into 1-hour spans. Just google those examples and then change your query, or create a new one. With Splunk there is rarely only one correct solution! Happy splunking!
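For completeness, a minimal sketch of the fixed-bucket approach with the span corrected to 1 hour, reusing the index and field names from the earlier reply in this thread (and with the caveat noted there that fixed buckets can undercount streaks that straddle a bucket boundary):

index=VPN_Something status=failure
| bin _time span=1h
| stats count by _time, user
| where count>5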
You cannot add your own CSS code to Dashboard Studio.