All Posts

Great response, many thanks Giuseppe. On the back of that, if the client asks me to show them a simpler way, for example a GUI way, how do I go about checking that? Thank you in advance.
Hi @Roy_9, I tried many times to uninstall this app, but each time I got continuous annoying messages, so in the end I left it installed and just made it not visible. Ciao. Giuseppe
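For reference, a minimal sketch of the app.conf setting that controls this, assuming the app lives under $SPLUNK_HOME/etc/apps/<app_name> (placeholder name); the same visibility toggle is normally available under Manage Apps in Splunk Web, and a restart or UI refresh may be needed for it to take effect:
# $SPLUNK_HOME/etc/apps/<app_name>/local/app.conf
# hide the app from the app menus without uninstalling it
[ui]
is_visible = false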
Hi @zijian, I see two or three possible issues: you may not have enough disk space for dispatched artifacts (how much space do you have in the Splunk file system?); your storage may not be performant enough: Splunk requires at least 800 IOPS (better 1200), so check your storage performance with a tool such as Bonnie++; and do you have sufficient resources (CPUs)? Splunk requires at least 12 CPUs, and more than 16 if you have Premium Apps like ES or ITSI. Anyway, after these checks, open a case with Splunk Support: using a diag, they can analyze your system and give you a detailed answer. Ciao. Giuseppe
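As a first check on the disk-space point, a minimal SPL sketch (assuming the REST endpoint server/status/partitions-space is available to your role and version; capacity and free should be reported in MB) that lists free space per partition on each Splunk instance:
| rest /services/server/status/partitions-space
| eval used_pct = round((capacity - free) / capacity * 100, 1)
| table splunk_server mount_point fs_type capacity free used_pct
Viewed as a table or bar chart, this quickly shows whether the partition holding $SPLUNK_HOME/var/run (where dispatch artifacts live) is close to full.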
Hi @ZombieT, if you have to advise your customer about indexes, always remember that an index is a silo that contains all kinds of events with the same retention time and the same access grants: an index isn't a database table; you define data characteristics using the sourcetype, not the index. Anyway, you can find out whether an index is used and when it was used for the last time by running a search like this:
| eventcount summarize=false index=*
| dedup index
or better:
| tstats count latest(_time) AS latest WHERE index=* BY index
| append [ | eventcount summarize=false index=* | dedup index | eval count=0 | fields index count ]
| stats sum(count) AS total values(latest) AS latest BY index
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S"), status=if(total=0,"No events","Last event at ".latest)
| table index status
Ciao. Giuseppe
Hi @marksmith991, if you read the Gartner or Forrester reports about SIEMs, you will find Splunk listed as a leader in this market sector; in your view, is a SIEM a security tool? I think that a SIEM (and Splunk is a SIEM market leader) is one of the cornerstones of every security platform (not just a tool!). You can then expand your solution with a SOAR (such as Splunk Phantom), a user behaviour analytics solution (such as Splunk UBA), threat intelligence feeds, and the many other apps you can run on Splunk. As for strategy, I think a security strategy must start from the company's board, extend to all employees, and be put into practice through many solutions that, in any case, should start from the SIEM or, better, from the Security Operations Center (SOC). The old idea that security means tools such as firewalls or antivirus installed on the company network is over: today security is an approach that runs from the board to every employee, using integrated technology solutions (note again: solutions, not tools!) in continuous evolution. Ciao. Giuseppe
Hi @LearningGuy, as I said, you don't hide fields in the base search: in the base search you need to include all the fields needed by the dashboard's panels, then in each panel you use only the fields you need. The base search is the starting point for all of the panels' searches. One additional hint: if you don't use a transforming command (such as stats or timechart, etc.) in the base search, its advantage is limited. Ciao. Giuseppe
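A minimal Simple XML sketch of this pattern, with made-up index, field, and panel names, just to illustrate a base search that ends in a transforming command and a panel that post-processes only the fields it needs:
<dashboard>
  <label>Base search example</label>
  <!-- base search: transforming command, returns every field any panel will need -->
  <search id="base">
    <query>index=web_demo | stats count AS hits dc(clientip) AS clients BY status</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Hits by status</title>
      <table>
        <!-- post-process search: reuses the base results, keeps only the fields this panel needs -->
        <search base="base">
          <query>| table status hits</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>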
Hi @pjcable, as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system, adding a stanza to outputs.conf isn't enough; follow the configuration described at that link. Ciao. Giuseppe
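For context, a minimal sketch of the props.conf/transforms.conf routing described on that page, assuming the two source types to replicate are called sourcetype_a and sourcetype_b (placeholder names) and the target is the [tcpout:security] group from the question:
# props.conf
[sourcetype_a]
TRANSFORMS-routing = route_to_security

[sourcetype_b]
TRANSFORMS-routing = route_to_security

# transforms.conf
# send every event of these source types to the "security" tcpout group
[route_to_security]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = security
With this in place, only events of those two source types pick up the security TCP routing, so defaultGroup = security would likely need to be dropped from [tcpout] to stop everything else from being forwarded by default; the linked documentation covers how this interacts with indexAndForward.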
Hello Splunkers!! I have upgraded Splunk to the latest 9.1.1 version on Windows Server. But after the upgrade I see a "loading" page at the top. Because of this I cannot access most of the capabilities and Splunk is not working properly. Please suggest a workaround for this.
Hi, We need to send some security events to an external party. We also need this data for our internal use. On my test instance I've configured outputs.conf as:
[tcpout]
defaultGroup = security
indexAndForward = 1

[tcpout:security]
server = localhost:9999
This has my events flowing to my fake external server and leaves them accessible on the internal side. However, I only want to send 2 source types there. How do I filter out the rest of the events?
Good morning yuanliu, Thank you very much for such a detailed response. I will go through the proposed solutions and let you know how this worked for us. As to the format of the log, this is a standard Windows Active Directory log. There is no way we can change the format. Many other Windows / AD logs will have a similar structure. Not much we can do about this. Again, thank you for your answer. Kind Regards, Mike.
Hi @ITWhisperer, Thanks for your response! Version: 9.0.5. App: Endpoint Cockpit. The dashboard is from:
<row>
  <panel id="global">
    <title>Global Compliance</title>
    <html>
      <style>
        .dashboard-body { background: #0000FF !important; }
        .dashboard-header h2 { color: #0000FF !important; }
      </style>
      <div class="infobutton" parent="global_status_op" type="collapse" style="display: none">
        <p style="font-size:15pt;"> The compliance is calculated as follows:</p>
        <p style="font-size:9pt;"> - If ENS version AND Agent version compliant</p>
        <p style="font-size:9pt;"> - If one among ENS, compliant</p>
      </div>
    </html>
  </panel>
</row>
Thanks!
Usecase is to find the threshold for the maximum attackers_score of the domain group and its attackerip count for the maximum attacker_score from a single ip. Do you mean that the threshold calculation is not to be used in the alert, and that you want to select every IP with the highest count in the group and send it as an alert? Now, back to the discussion about min and max. Assuming you still want a different formula when the range is too small, as I speculated, you can do
index=ss group="Threat Intelligence"
``` here I'm grouping the domain names into a single group by their naming convention ```
| eval domain_group=case(
    like(domain_name, "%cisco%"), "cisco",
    like(domain_name, "%wipro%"), "wipro",
    like(domain_name, "%IBM%"), "IBM",
    true(), "other"
)
| stats count as hits, min(attacker_score) as min_score, max(attacker_score) as max_score by domain_group, attackerip
| sort -hits
| eval range = max_score - min_score
| eval threshold = round(if(range > min_score / 10, min_score + (2 * (range/3)), max_score * 4 / 5), 0)
| eventstats max(hits) as max_hits by domain_group ``` eventstats instead of streamstats ```
| where hits == max_hits
| table domain_group, min_score, max_score, attackerip, hits, threshold
If this does not give you the desired output, you will need to illustrate the input, the actual output, and the desired output (anonymized as needed), and explain the logic between input and desired output without using SPL.
I'm curious about Splunk and its role in cybersecurity. Can anyone shed some light on whether Splunk is classified as a cybersecurity tool? How does it contribute to cybersecurity strategies, and are there specific use cases that make it stand out in the realm of cybersecurity tools? Appreciate any insights or experiences you can share.     Regards: @marksmith991 
Hello, I am fairly familiar with Splunk, but I do need to improve on indexes. I am currently working in a new client environment and they have a large number of indexes within Splunk; however, some of them are inactive. A couple of questions:
> How can I determine if an index is active/connected properly?
> Is there an easier way to show the above; for example, if there are 100 indexes, how can I find out which are still active in a graph or a more visual view?
Hope that makes sense. Thank you in advance for any advice.
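A minimal sketch of a search that could sit behind a bar chart for that more visual view, assuming an event count per index over the last 30 days is a good enough proxy for "active" (adjust the time range to your retention):
| tstats count latest(_time) AS latest WHERE index=* earliest=-30d BY index
| eval days_since_last_event = round((now() - latest) / 86400, 1)
| sort - count
| table index count days_since_last_event
Indexes that receive no events at all in that window simply will not appear, which is itself a hint that they are inactive.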
Hi, One of our three clustered indexers is having search errors and high CPU fluctuations for the splunkd main process after an improper reboot, as follows:
In Splunk Web search:
remote search process failed on peer
Search results might be incomplete: the search process on the peer:[Affected indexer] ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
[Affected indexer] Search process did not exit cleanly, exit_code=111, description="exited with error: Application does not exist: Splunk_SA_CIM". Please look in search.log for this peer in the Job Inspector for more info.
In splunkd.log of the affected indexer:
WARN SearchProcessRunner [31756 PreforkedSearchesManager-0] - preforked process=0/101006 with search=0/127584 exited with code=111
ERROR SearchProcessRunner [31756 PreforkedSearchesManager-0] - preforked search=0/127584 on process=0/101006 caught exception: used=1, bundle=7471316304185390773, workload_pool=, generation=11, age=7.418, runtime=7.203, search_started_ago=7.204, search_ended_ago=0.000
ERROR SearchProcessRunner [31756 PreforkedSearchesManager-0] - preforked process=0/101006 with search=0/127584 and cmd=splunkd\x00search\x00--id=remote_SH-ES_scheduler__splunkadmin__SplunkEnterpriseSecuritySuite__RMD5852d4ed30e6a890b_at_1698892200_90939\x00--maxbuckets=0\x00--ttl=60\x00--maxout=0\x00--maxtime=0\x00--lookups=1\x00--streaming\x00--sidtype=normal\x00--outCsv=true\x00--acceptSrsLevel=1\ died on exception (exit_code=111): Application does not exist: SplunkEnterpriseSecuritySuite
WARN PeriodicReapingTimeout [30157 DispatchReaper] - Spent 10650ms reaping search artifacts in /splunk/var/run/splunk/dispatch
WARN DispatchReaper [30157 DispatchReaper] - The number of search artifacts in the dispatch directory is higher than recommended (count=6608, warning threshold=5000) and could have an impact on search performance. Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf / dispatch_dir_warning_size.
WARN DispatchManager [13827 TcpChannelThread] - quota enforcement for user=splunk_user1, sid=soc_user1_c29jX2Njb191c2VyMQ__SplunkEnterpriseSecuritySuite__RMD57f02abc0263583b0_1697962710.21728, elapsed_ms=23865, cache_size=1591 took longer than 15 seconds. Poor search start performance will be observed. Consider removing some old search job artifacts.
Regards, Zijian
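Regarding the DispatchReaper warning, a minimal sketch of the "splunk clean-dispatch" CLI command it mentions, run on the affected indexer; the destination directory and the age cutoff here are only examples, so check the syntax for your version before using it:
# move dispatch artifacts older than 7 days into the destination directory
$SPLUNK_HOME/bin/splunk clean-dispatch /opt/splunk/old-dispatch-artifacts/ -7d@d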
Do it in two steps, just like you illustrated manually.
| stats values(score) as score by vuln ip
| stats sum(score) by ip
This is an emulation of your sample data to compare with real data:
| makeresults format=csv data="ip,vuln,score
1.1.1.1,vuln1,0
1.1.1.1,vuln1,0
1.1.1.1,vuln2,3
1.1.1.1,vuln2,3
1.1.1.1,vuln2,3
1.1.1.1,vuln3,7
1.1.1.1,vuln3,7
2.2.2.2,vuln1,0
2.2.2.2,vuln4,0
2.2.2.2,vuln5,5
2.2.2.2,vuln5,5"
``` data emulation ```
This emulation will give:
ip        sum(score)
1.1.1.1   10
2.2.2.2   5
Hi @yuanliu, the use case is to find the threshold for the maximum attacker_score of the domain group and its attackerip count for the maximum attacker_score from a single ip. Thanks
Hello, How do I calculate the sum of a field based on another distinct field? For example: how do I find the sum of score for distinct vulnerabilities (excluding 0), grouped by ip? Thank you so much.
Before calculation:
ip        vuln    score
1.1.1.1   vuln1   0
1.1.1.1   vuln1   0
1.1.1.1   vuln2   3
1.1.1.1   vuln2   3
1.1.1.1   vuln2   3
1.1.1.1   vuln3   7
1.1.1.1   vuln3   7
2.2.2.2   vuln1   0
2.2.2.2   vuln4   0
2.2.2.2   vuln5   5
2.2.2.2   vuln5   5
After calculation:
1.1.1.1: sum(vuln2 [score]) + sum(vuln3 [score]) = 3 + 7 = 10
2.2.2.2: sum(vuln5 [score]) = 5
ip        sum (score of distinct vuln)
1.1.1.1   10
2.2.2.2   5
Hi All, is there a way to add dimensions to the URL that posts into the "Search Entity Dimensions" search in the ITSI Infrastructure Overview dashboard? E.g. "service_name" and "department". Ideally I want to link from a dashboard into a list of entities focused by dimensions, for example: https://itsi-blah.splunkcloud.com/en-GB/app/itsi/entity_overview?countPerPage=20&earlie[…]w&page=1&refreshInterval=-1&sortTypeBy=entities_count... & <my dimensions>
It worked. Thank you so much for your help. I accepted your solution. I wish there were another way to hide the field, though; let me know if there is. Thank you!!