
All Posts

It does not matter whether span or bins equals 10; it creates 4 or 5 buckets. I even set bins=20, but it returned the same result. I have to agree with @PickleRick that the behaviour of bin is sort of interesting. But in fact the documentation says: "bins Syntax: bins=<int> Description: Sets the maximum number of bins to discretize into." So Splunk decides how many bins it creates, not me.
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name
Try something like this:

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host] groupby host
| append [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host | eval count=0]
| stats sum(count) as count max(latest_event_time) AS latest_event_time by host
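If it helps, here is an alternative sketch that starts from the lookup and left-joins the tstats results, so hosts with no events keep a zero count (same lookup and index assumptions as above; not tested against your data):

| inputlookup Firewall_list.csv
| rename Primary AS host
| join type=left host
    [| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" groupby host]
| fillnull value=0 count

Note that join is subject to subsearch limits, so the append/stats pattern above usually scales better.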
I am surprised that the log export is not sending all logs. Which type of log is missing? Do you know if it's possible to get this log through another integration or log collection setup from GitHub?
You seem to have the "App" filter set to Search & Reporting. Are you sure that the "Weekly Sort Options" savedsearch is in the Search & Reporting app? If not then you can set App to "All" to filter f... See more...
You seem to have the "App" filter set to Search & Reporting. Are you sure that the "Weekly Sort Options" savedsearch is in the Search & Reporting app? If not then you can set App to "All" to filter for all searches visible to you.
Hi Splunkers, my requirement is below. I have a lookup where 7 hosts are defined. When my search runs, with both tstats and stats, I only get counts for the 5 hosts whose count is greater than 0. Can someone help with how we can get a count of 0 for the hosts we pass from the lookup? Query:

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host] groupby host
Indeed the bin command behaves... interestingly. A run-anywhere example:

| makeresults count=999
| streamstats count
| eval count=count+1
| map maxsearches=10000 search="| makeresults count=10000 | eval r=random() % 10000 | bin bins=$count$ r | stats count by r | stats count as bins | eval count=$count$"

It shows that it splits into either 1, 10, 100 or 1000 buckets. That's... strange.
Not by default, as that would log out active sessions in the web UI. You can trigger a refresh of the auth-services endpoint by specifying it as an entity. I think this will reload authorize.conf, but I have not tested it.

/debug/refresh?entity=admin/auth-services
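One way to check whether the reload took effect, without logging anyone out, is to inspect the roles over REST from SPL (a sketch; "user" is a placeholder, substitute your own role name):

| rest /services/authorization/roles splunk_server=local
| search title="user"
| table title capabilities imported_capabilities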
I am reaching out to request assistance with building a Splunk query that extracts the names from the accessible_by arrays in the log data and presents them in a single row, in a field named accessible_by.name. The expected output is:

accessible_by.name
Pradeep Tallam
Lakshmi Prasanna Lakamsani

Here is an example of the log message format:

message: {BOX CM FPN EVENT requestBody {"channel_id":"","event_type":"COLLAB_INVITE_COLLABORATOR","event_id":"11889593183610133","created_at":"2024-08-01T06:16:28-07:00","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"source":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings","created_at":"2024-04-16T07:25:18-07:00","modified_at":"2024-05-15T13:53:17-07:00","description":"","size":20180208521,"path_collection":{"total_count":0,"entries":[]},"created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"modified_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"trashed_at":null,"purged_at":null,"content_created_at":"2024-04-16T07:25:18-07:00","content_modified_at":"2024-05-15T13:53:17-07:00","owned_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"shared_link":null,"folder_upload_email":null,"parent":{"type":"folder","id":"0","sequence_id":null,"etag":null,"name":"All Files"},"item_status":"active","item_collection":{"total_count":5,"entries":[{"type":"file","id":"1516571855910","file_version":{"type":"file_version","id":"1665332707110","sha1":"840bac4b4aeba16ca41c7b7a0b7288a9fd6aecac"},"sequence_id":"0","etag":"0","sha1":"840bac4b4aeba16ca41c7b7a0b7288a9fd6aecac","name":"Chorus deployment part 1.mov"},{"type":"file","id":"1516571257880","file_version":{"type":"file_version","id":"1665332512280","sha1":"0f4147580b7b7b7cde0036124fdda1738f9c1b92"},"sequence_id":"0","etag":"0","sha1":"0f4147580b7b7b7cde0036124fdda1738f9c1b92","name":"Chorus Deployment part 2.mov"},{"type":"file","id":"1516586947142","file_version":{"type":"file_version","id":"1665349857542","sha1":"b953ffaa99b32b66e0a5f607205a5116c0aadc1e"},"sequence_id":"0","etag":"0","sha1":"b953ffaa99b32b66e0a5f607205a5116c0aadc1e","name":"Deployment process of Oomnitza Certs for Retail and AC CR.mov"},{"type":"file","id":"1503788446950","file_version":{"type":"file_version","id":"1651069594950","sha1":"63b30177d283bf8061b7ad4d9712882829950f77"},"sequence_id":"1","etag":"1","sha1":"63b30177d283bf8061b7ad4d9712882829950f77","name":"Roomlookup job deployment.mov"},{"type":"file","id":"1531717232578","file_version":{"type":"file_version","id":"1682330389378","sha1":"f4f0fc6473ce24ef66635ca8d8ff410b734c95a1"},"sequence_id":"0","etag":"0","sha1":"f4f0fc6473ce24ef66635ca8d8ff410b734c95a1","name":"Slack Rooms App weekly deployment.mov"}],"offset":0,"limit":100,"order":[{"by":"type","direction":"ASC"},{"by":"name","direction":"ASC"}]},"collaborators":{"next_marker":"","previous_marker":"","entries":[{"type":"collaboration","id":"55749534901","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-08-01T06:16:27-07:00","modified_at":"2024-08-01T06:16:27-07:00","expires_at":null,"status":"pending","accessible_by":null,"invite_email":"ccs_platform_engineering@group.apple.com","role":"editor","acknowledged_at":null,"item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}},{"type":"collaboration","id":"53612889578","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-04-28T23:34:46-07:00","modified_at":"2024-04-28T23:34:46-07:00","expires_at":null,"status":"accepted","accessible_by":{"type":"user","id":"9049454819","name":"Pradeep Tallam","login":"pradeep_tallam@apple.com","is_active":true},"invite_email":null,"role":"editor","acknowledged_at":"2024-04-28T23:34:46-07:00","item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}},{"type":"collaboration","id":"53611885725","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-04-28T23:40:15-07:00","modified_at":"2024-04-28T23:40:15-07:00","expires_at":null,"status":"accepted","accessible_by":{"type":"user","id":"20437795550","name":"Lakshmi Prasanna Lakamsani","login":"llakamsani@apple.com","is_active":true},"invite_email":null,"role":"editor","acknowledged_at":"2024-04-28T23:40:15-07:00","item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}}]}}}}

Could you please assist in creating this query?
What do you mean by "services"? You can see what apps are installed by selecting "Manage apps" from the Apps menu. It's not clear what you want to know about what you are ingesting.  The inputs the... See more...
What do you mean by "services"? You can see what apps are installed by selecting "Manage apps" from the Apps menu. It's not clear what you want to know about what you are ingesting.  The inputs themselves probably are defined on your on-prem forwarders so are not visible from Splunk Cloud.  Of course, the actual data ingested is visible in the indexes.
Yeah, typically the way to resolve admin account lockouts is to log in with the CLI, but I don't think you have CLI access. Can you reach out to Splunk support?
So if I understand correctly, you have a list input with multiple values. For each value in the list, you want SOAR to run a query (in a ticketing system?), and if that query result meets a condition, you would like SOAR to create a ticket with a sys ID from the result of the query? Thus if 4 values in the list meet the condition of the query results, then 4 tickets will be created?
Hello, is /debug/refresh enough to reload role capabilities from authorize.conf? Thanks.

Splunk Enterprise v9
You checked the SPL2 bin command, not the SPL one. https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Bin
Yes, if you add the segment in my last reply then your alert will always find at least one result.
You need to specify your needs more precisely. Finding sources of events is one thing, finding - for example - all hosts shown in firewall logs is a completely different cup of tea.
I currently run a monthly search for searches/jobs that take a long time. I then look up the job and check whether there is an alert associated with it. I will then work with teams/individuals to see if the job can be measured as a metric to trigger in SignalFx instead. However, there are some jobs I cannot find, and I think I am missing some part of the logic of how things work. I run this search:

index=_internal sourcetype=scheduler user!=nobody
| table _time user savedsearch_name status scheduled_time run_time result_count
| sort run_time desc

[screenshot of results omitted]

I try to look up the savedsearch_name (like Weekly Sort Options) on this page -splunkcloud.com/en-GB/app/search/job_manager but nothing comes up. Some savedsearch_name values show up with results, others don't, and I'm not sure why.
Has anybody here ever cracked the nut on how to send Splunk messages triggered by an alert to a Microsoft Teams "chat" (not a Teams channel)?
Hi all, we recently ran into issues with our heavy forwarder being unable to connect to certain IPs in our Splunk Cloud environment.

07-22-2024 19:20:14.384 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP1]]:9997 timed out
07-22-2024 19:19:54.508 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP2]]:9997 timed out
07-22-2024 19:19:34.584 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP3]]:9997 timed out

This appears to be a pretty common error based on what I've seen in other community posts, and typically it is related to a firewall issue. I wanted to document that in our case, the issue was related to the IPs not being assigned to indexers on the Splunk Cloud instance. According to Splunk support, "Usually, these DNS inputs (inputs1.companyname.splunkcloud.com, inputs2.........., inputs15.companyname.splunkcloud.com) resolve to the aforementioned IP tables defined, but these IPs should be then linked to the indexers which is not what we currently have present." I assume this is an uncommon root cause, and wanted to put it out there as another troubleshooting option when investigating this issue. I hope it helps.
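As a quick triage sketch for anyone hitting the same symptom, this counts the timeout warnings per forwarder and destination IP from the internal logs (assumes your forwarders ship their _internal data; the rex pattern matches real IPs, not the redacted placeholders above):

index=_internal sourcetype=splunkd log_level=WARN component=AutoLoadBalancedConnectionStrategy "timed out"
| rex "ip=(?<dest_ip>[^:\s]+)"
| stats count latest(_time) AS last_seen by host dest_ip
| convert ctime(last_seen)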
Hi team, basic questions. How do I check the following on Splunk Cloud:
- what services I am using
- what applications
- what I am ingesting