All Posts


What search have you used to populate the summary index?
output of the query
Sorry for the delay, I found this after searching for a question which you posted and also answered. I edited 2022-12 to 2023-12 and ran a oneshot of test.csv with the header timestamp,account_id,desc. Please try the following:

earliest=9/1/2023:0:0:0 index=test source=/tmp/test.csv
| transaction account_id
| rename linecount as totalqueries
| table account_id timestamp duration totalqueries
| eval frequency=duration/totalqueries
| where frequency < 5
These are probably being removed by the where command, i.e. where consecutive messages are not the same the count resets, and you are left only with occurrences which appear 3 or more times in a row (as requested).
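A minimal runnable demo of that behavior, using synthetic data (the Message values A/B/C are made up): with consecutive Message values A, A, B, B, B, C the count resets on every change, so where count > 2 keeps only the third consecutive B.

| makeresults count=6
| streamstats count as n
| eval Message=case(n<=2, "A", n<=5, "B", true(), "C")
| streamstats count reset_on_change=true by Message
| where count > 2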
Hey, thanks for your reply. I ended up scripting it and just getting it to turn over every 2 minutes. Works for me. Thank you for the idea.
I chose the source from the forwarded input selection to create the input in Splunk, but I can't see Sysmon among the sources in the logs. I also made the inputs.conf setting via the forwarder, but unfortunately I still couldn't see it. I do have logs and there are forwarders; my other logs are coming in, but the Sysmon log is not. I would appreciate your help.
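In case it helps to compare, a minimal sketch of a typical Sysmon stanza for the forwarder's inputs.conf, assuming the standard Windows event log channel name; your_sysmon_index is a placeholder and must already exist on the indexers:

# inputs.conf on the forwarder (index name is a placeholder)
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = your_sysmon_index

If a stanza like this is deployed but nothing arrives, the usual suspects are the index not existing, the channel name being misspelled, or Sysmon not actually writing to that event log on the host.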
Hero of the day. Thank you @gcusello, that was exactly what I wanted. What if I want to get events between timestamps (A and B)? For example, I have a time, say '2023-09-02T15:22:04.001854200Z', and another '2023-09-02T15:27:04.001854200Z'. I want the same query, except I set the time range myself. Thank you.
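A minimal sketch of pinning that window in the search itself, using the earliest/latest time modifiers (the index name is a placeholder, and epoch-second values also work if sub-second precision matters):

index=your_index earliest="09/02/2023:15:22:04" latest="09/02/2023:15:27:04"

followed by the rest of the original search unchanged.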
My index has logs for multiple Robot jobs, so I added a search before the suggested string:

index=ee_rpa_uipath_platform_* AND OrganizationUnitID IN ($folder$)
| sort OrganizationUnitID, RobotName, _time, Message
| streamstats count reset_on_change=true by Message
| where count > 2
| table OrganizationUnitID, User, RobotName, ProcessName, MachineName, _time, Message
| sort -_time

...but now what I am finding is that ONLY one Robot has its logs displayed once the search completes, i.e. while the search is ongoing, logs for the other Robots are displayed in the panel but then disappear once the search finishes. Any ideas on why these logs for the other Robots are removed from the search? I put the suggested search string in my search.
Hello, I have been trying to integrate Nessus Essentials with SOAR for days but without success so far. I installed the Nessus app in SOAR and configured the new asset with the API keys from Nessus Essentials and the Nessus IP address and port.

Nessus server IP/hostname: https://192.168.199.78 (I tried http and without the scheme)
Port that the Nessus server is listening on: 8834

When I test connectivity I get:

1 action failed
Error Connecting to server. Details: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //192.168.199.78:8834/users (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fc8b0364940>: Failed to establish a new connection: [Errno -2] Name or service not known'))

I searched the community and other sources but didn't find anything that can help. Can anybody help me? Many thanks.
Hi @corti77, you're right, this Add-on is for O365 Defender, but to my limited knowledge of Defender (I'm not a fan of it!), and it's possible I'm wrong, it should be possible to get Defender logs from the Cloud using this Add-on. If it isn't possible, sorry for my wrong answer! Ciao. Giuseppe
Hi @gcusello, are you sure that app covers the basic Microsoft Defender included in every Microsoft OS? Checking the app documentation, it mentions the Microsoft 365 Defender and Defender for Endpoint products. Those are the EDR and SOAR solutions from Microsoft; there is no mention of the basic AV logs. https://docs.splunk.com/Documentation/AddOns/released/MSSecurity/Releasehistory Thanks.
Hi @sigma, let me understand: you want to filter the logs from index B using the timestamp +5 min found where, in index A, the condition is fa="your_value", and in the end the output is fb, is that correct? If this is your requirement, you could try something like this:

index=indexB [ | search index=indexA fa="your_value" | eval earliest=_time, latest=relative_time(_time,"+300s") | fields earliest latest ]
| table fb

Ciao. Giuseppe
I have an index A and another index B. Logs in A have a correlation to logs in B, but the only common field between them is 'timestamp'. There is a field 'fa' in index A and a field 'fb' in index B. The timestamp in index A logs has a +5 minute drift relative to index B. Now I want to write a query that matches field 'fa' in index A, finds the corresponding log in index B based on the timestamp (with the +5 minute drift), and returns field 'fb' from index B.
Hi @mjh, what is your question? If you want to know whether the solution you shared is correct, you are the one who can perform the check: do you have results? If yes, it's correct; if not, you have to debug, as there is probably some error in the field extractions. Ciao. Giuseppe
Hi @BrC_Sys99, what do you mean by "Hybrid during migration"? If you mean sending logs to both Splunk Cloud and On-Premise, it's easy. If you have some infrastructure on premise, it's a best practice to use one (or better, two) Heavy Forwarders as a concentrator for all the logs from the on-premise infrastructure. In this way you don't need to open firewall routes between all your servers and appliances and Splunk Cloud, but only the routes between the two Heavy Forwarders and Splunk Cloud. Using this architecture, you can create (on the HFs) a fork that duplicates the data flows, sending all data both to the old on-premise indexers and to Splunk Cloud. When the migration finishes, you'll remove the fork, you'll have all the logs only on Splunk Cloud, and you'll be able to dismiss the old Splunk infrastructure. The only role of the old infrastructure that you must maintain is the Deployment Server, if you have more than 50 clients to manage. Ciao. Giuseppe
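As an illustration of the fork Giuseppe describes, a minimal outputs.conf sketch for the Heavy Forwarders; hostnames are placeholders, and the Splunk Cloud stanza (including its SSL settings) normally comes from the forwarder credentials app downloaded from your Cloud stack:

# outputs.conf on the Heavy Forwarders (server values are placeholders)
[tcpout]
defaultGroup = onprem_indexers, splunkcloud_indexers

[tcpout:onprem_indexers]
server = idx1.example.local:9997, idx2.example.local:9997

[tcpout:splunkcloud_indexers]
server = inputs.examplestack.splunkcloud.com:9997

Removing onprem_indexers from defaultGroup at cutover is what ends the fork.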
When I create a report and enable the summary index, the results are in the format below.

Table:
id    _time
1     2022-06-01 12:01:30.802
1     2022-06-01 12:11:47.069

But when I call this summary index using an SPL query, the milliseconds are missing from the _time column. The query I have used (it fetches the latest run result):

index="summary" report="yy"
| eventstats max(search_now) as latestsearch by id, report
| where search_now = latestsearch
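If the millisecond precision is still stored in _time and only the rendering truncates it, a hedged sketch that formats it explicitly (time_display is a made-up field name):

index="summary" report="yy"
| eventstats max(search_now) as latestsearch by id, report
| where search_now = latestsearch
| eval time_display=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| table id time_display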
Hi @monawwer, when you speak of quality, I suppose you're speaking about the optimization of search code to have faster searches. About this, there are some best practices that you can find in the Splunk documentation, e.g.:

https://docs.splunk.com/Documentation/SplunkCloud/latest/Search/Quicktipsforoptimization
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Optimizing_search
https://docs.splunk.com/Documentation/Splunk/9.1.0/Search/Writebettersearches
https://docs.splunk.com/Documentation/Splunk/9.1.0/Search/Built-inoptimization
https://www.tutorialspoint.com/splunk/splunk_search_optimization.htm
https://www.javatpoint.com/splunk-search-optimization
https://www.youtube.com/watch?v=aSDZ_znuzvM

About the code itself, there aren't specific rules because the code is guided by Splunk, so you cannot come up with strange constructs. In conclusion, as you can read in the community, there are some best practices to apply, e.g.:

Time is the most efficient filter in Splunk: the most effective thing you can do is to narrow by time.

Specify one or more index values at the beginning of your search string: in this way the search is limited only to those indexes instead of all the indexes.

The more you tell the search engine, the better the chance of a good result: when applicable, searching for "access denied" is always better than searching for "denied".

To make searches more efficient, include as many terms as possible: if you want to find events with "error" and "sshd", and 90% of the events include "error" but only 5% include "sshd", include both values in the search.

Inclusion is generally better than exclusion: searching for "access denied" is faster than NOT "access granted".

Apply powerful filtering commands as early in your search as possible: filtering to one thousand events and then to ten events is faster than filtering to one million events and then narrowing to ten. Move search terms as far left as possible; for example, remove duplicate events first, then sort.

Avoid using wildcards at the beginning or middle of a string: wildcards at the beginning of a string scan all events within the timeframe, and wildcards in the middle of a string may return inconsistent results, so use fail* (not *fail or *fail* or f*il).

When possible, use OR instead of wildcards: for example, use (user=admin OR user=administrator) instead of user=admin*.

Use the join or transaction commands only when there isn't any other solution, use only the fields you need, if possible use Post Process Searches to reduce search executions in dashboards, use acceleration methods (e.g. summary indexes, Data Models, etc.), and so on.

Ciao. Giuseppe
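To illustrate a few of these tips together, a hedged before/after sketch (the index, sourcetype, and user values are made up):

Slower: all indexes, leading wildcard, no time restriction in the search string
index=* *fail* user=admin*

Faster: narrowed by time and index, explicit values, trailing wildcard only
earliest=-4h index=security sourcetype=linux_secure (user=admin OR user=administrator) fail*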
I am new to Splunk, so I'm learning, and I know that it can do quite a bit. I am searching for similar network traffic for users based on our proxy indexes. I want to know if there is a particular site visited by all of the users in our list of 50 or so, so user and url are necessary. I need to pull it from all of their data in our network proxy, though. Here is a redacted portion of a search I have honed down to, but feel free to suggest something better.

Edit to provide a clear question: the search below doesn't work; can you provide a different search or edits that would assist me in getting the data I'm looking for?

index=<network one> <userID> IN (userID1,userID2) AND url=*
| stats dc(userID) as count by url
| where count=2
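A hedged sketch of the same pattern extended to the full list, assuming userID and url are the extracted field names and that every user ID from the list appears in the IN clause (the index name and IDs are placeholders):

index=your_proxy_index userID IN (userID1, userID2, userID3) url=*
| stats dc(userID) as user_count by url
| where user_count=50

The where threshold should equal the number of IDs in the IN list, so a URL only survives if every listed user visited it.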
I am not sure why Splunk cannot handle spaces and a pipe as delimiters. Have you tried the rex command from my emulation?

| rex "(?<ts>(\S+\s){3}) (?<event_name>\w+)\|(?<task_id>\d+) (?<event_id>\d+)"

In your case, you probably do not need the ts extraction because Splunk already gives you _time.
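A quick way to test that extraction against a single made-up event (note that the literal space after the three timestamp tokens in the pattern means the sample needs two spaces before the event name):

| makeresults
| eval _raw="Sep 02 12:00:01  LOGIN|12345 678"
| rex "(?<ts>(\S+\s){3}) (?<event_name>\w+)\|(?<task_id>\d+) (?<event_id>\d+)"
| table ts event_name task_id event_id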
Splunk Hybrid Search has been replaced by Federated Search (FS). It allows you to search both your local and Cloud indexes from the same search head. There are plenty of caveats to FS, so I don't recommend it for general use.

You can, and this is very common, send your data to both your local indexers and to Splunk Cloud indexers at the same time. That lets you use your on-prem system for historical searches while populating Splunk Cloud with data for a future cutover.

Finally, it's also possible to transfer your data from your on-prem indexers to Splunk Cloud and switch over immediately to using Cloud. That, however, requires Splunk Professional Services.
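For completeness, a hedged sketch of what a standard-mode federated search looks like once a federated provider and federated index have been configured (federated_onprem_web is a made-up name):

index=federated:federated_onprem_web status=500
| stats count by host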