All Topics



I would like to limit the 'All' option to the values my query actually returns, not a literal * for targetAppAlternateId.

<form theme="dark">
  <label>Logins</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="myApp">
      <label>Application:</label>
      <fieldForLabel>targetAppAlternateId</fieldForLabel>
      <fieldForValue>targetAppAlternateId</fieldForValue>
      <search>
        <query>index=myIndex targetAppAlternateId="App1.*" OR targetAppAlternateId="App2" | dedup targetAppAlternateId | sort targetAppAlternateId</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <choice value="*">All</choice>
    </input>

Any help would be greatly appreciated.
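A sketch of one possible approach (the field names come from the post, but the append trick itself is an assumption, not from the original): drop the static <choice value="*">All</choice> and have the populating search append its own "All" row, whose value is an OR expression built from only the values the search actually returned, with <fieldForLabel>label</fieldForLabel> and <fieldForValue>value</fieldForValue>:

```
index=myIndex targetAppAlternateId="App1.*" OR targetAppAlternateId="App2"
| dedup targetAppAlternateId
| sort targetAppAlternateId
| eval label=targetAppAlternateId, value="\"".targetAppAlternateId."\""
| append
    [ search index=myIndex targetAppAlternateId="App1.*" OR targetAppAlternateId="App2"
      | stats values(targetAppAlternateId) AS v
      | eval label="All", value="\"".mvjoin(v, "\" OR targetAppAlternateId=\"")."\"" ]
| table label value
```

The consuming search would then reference the token unquoted, as targetAppAlternateId=$myApp$, so "All" expands to only the returned applications. Note that inside dashboard XML the embedded double quotes must be escaped as &quot;.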
Hi, I am new to Splunk. Currently I am using this query to get the count: index=* SrcCountry=* | stats count by SrcCountry. If I wanted to narrow the search down to one country and get a week-by-week comparison of the count, what kind of query could I use?
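A sketch of one way to do this (the country value is a placeholder): filter to the one country and bucket the count by week with timechart:

```
index=* SrcCountry="Canada"
| timechart span=1w count
```

Or, for an overlaid week-over-week comparison of the last few weeks, timewrap can fold the series onto one axis:

```
index=* SrcCountry="Canada" earliest=-4w@w latest=@w
| timechart span=1d count
| timewrap 1w
```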
Sounds easy, eh? I've been using Splunk since v3 -- I've set up forwarding for servers dozens of times and migrated countless indexes, but this one is kicking my butt. I have a stand-alone Splunk server (Enterprise) that's been ingesting data for years in the form of CSV files and providing a front end for analysts. I need to decommission that box and get the data into our main cluster. I set up forwarding from the stand-alone server to feed into a heavy forwarder (that has a thousand other hosts feeding into it) and then into the cluster. It's working insomuch as it forwarded data, but only from the last CSV file (back to March 17th, FWIW). I can't simply copy the files into a new index because of the cluster, and I no longer have the previous CSV files to re-ingest (going back to 2009). I've tried clearing the fishbucket, hoping to force it to resend everything it knows. It's feeding into an index of the same name. No errors in splunkd.log... Thoughts? Thanks! Michael
Gentlemen, my raw events have a field called login_time with values in the format (2022-04-11 10:52:08). This is the time a user logs in to the system. There is no logout_time field in the raw data. Now, the requirement is to track all activities done by the user, starting from login_time and ending at login_time + 8 hours. 1) How do I add these 8 hours to login_time in my search? Do I create an eval function, something like eval logout_time = login_time + 8:00:00? 2) transaction works with strings in startswith and endswith. Can it be used to track time, which is in numerical format, as shown in the query below? If not, how else can I group all events done by the user within the login and logout time?

index=xxxx transaction startswith="2022-04-11 10:52:08" endswith="2022-04-11 10:52:08 + 8 hrs" | stats .... by user

Hope I am clear
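A sketch of one approach (index and field names come from the post; everything else is an assumption): convert login_time to epoch seconds with strptime, add 8 hours with relative_time, and let transaction open a session per user with maxspan=8h instead of literal timestamps in startswith/endswith:

```
index=xxxx
| eval login_epoch=strptime(login_time, "%Y-%m-%d %H:%M:%S")
| eval logout_epoch=relative_time(login_epoch, "+8h")
| eval logout_time=strftime(logout_epoch, "%Y-%m-%d %H:%M:%S")
| transaction user startswith=eval(isnotnull(login_time)) maxspan=8h
| stats count by user
```

The startswith=eval(isnotnull(login_time)) clause marks the login event as the start of each transaction, and maxspan=8h closes it after the 8-hour window without needing an explicit endswith.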
Hello. Good afternoon. Looking for some best practices here. Over the years, we have been using the UF to ingest Windows data. This is a reliable solution for ensuring the Windows events are being ingested into Splunk indexes. We now have a new solution called CrowdStrike, which also appears to ingest Windows events. Based on the experience of the Splunk Community, can anyone share their experiences (or best practices)? We would like to have a reliable Windows solution but refrain from having duplicate data. Regards, Max
Hello, I'm trying to find a way to fetch/get the HEC host and port of a Splunk instance using the JavaScript SDK in the frontend, but I could not find any source of information that allows me to do such a thing... Anyone able to help? Thanks
Hi, does AppD support PowerBuilder? ^ Post edited by @Ryan.Paredez to split the post into a new conversation and improve the title for searchability.
I have 2 searches and I want to link the two together in one table. The first search:

index=very_big_index caseNumber=1234567799 | table _time Name caseNumber UID phone

This displays the following as expected, but the phone field is blank:

_time: 11APR2022, Name: John Smith, caseNumber: 1234567799, UID: 111222333444555666777, phone: (blank)

The second search with the UID yields the phone number but nothing else:

index=very_big_index 111222333444555666777 | stats values(phone) as phone

results: phone = 123-555-1234

How can I efficiently link these 2 searches together using the common field name/value of UID/111222333444555666777?
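A sketch of one stats-based way to link them (this assumes the phone events also have UID extracted as a field; the bare number in the base search still matches the raw text either way):

```
index=very_big_index (caseNumber=1234567799 OR 111222333444555666777)
| stats earliest(_time) AS _time values(Name) AS Name values(caseNumber) AS caseNumber values(phone) AS phone by UID
| table _time Name caseNumber UID phone
```

Grouping with stats ... by UID is generally preferred over join in SPL, since it avoids join's subsearch result limits.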
Hi, I have three panels in one row. Since panels A and B have less information and are 'thin', it looks weird together with the 'thick' panel C in a row. I would like to stack panels A and B together and then put them next to C. How should I realize that? Many thanks for your help!
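A minimal Simple XML sketch (the searches are placeholders): putting two visualization elements inside one <panel> stacks them vertically, and that panel then sits side by side with panel C in the same <row>:

```xml
<row>
  <panel>
    <single>
      <title>Panel A</title>
      <search><query>index=_internal | stats count</query></search>
    </single>
    <single>
      <title>Panel B</title>
      <search><query>index=_internal | stats dc(sourcetype)</query></search>
    </single>
  </panel>
  <panel>
    <chart>
      <title>Panel C</title>
      <search><query>index=_internal | timechart count</query></search>
    </chart>
  </panel>
</row>
```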
Hi All, I have generated the Azure AD SAML XML and certificate using this Splunk blog: https://www.splunk.com/en_us/blog/tips-and-tricks/configuring-microsoft-s-azure-security-assertion-markup-language-saml-single-sign-on-sso-with-splunk-cloud-azure-portal.html After loading the XML in a totally new instance, it gives the error below:

Verification of SAML assertion using the IDP's certificate provided failed. Error: failed to verify signature with cert

In the Azure portal I can see the certificate is active. Not sure where to look further... any leads here? @tkomatsubara_sp @richgalloway @tshah-splunk
Hello, I have a customer that wants to create a Search Head Cluster. They have deployed 4 search heads and 2 deployers. The customer's idea is to have 2 deployers, one acting as master and one acting as a backup server that replaces the master in case of failure. Is that possible? Thanks a lot
Handy search for a dashboard:

earliest=-90d@d `notable`
| eval isSuppressed=if(match(eventtype,"Suppression"),1,0)
| stats count(eval(like(urgency,"informational"))) as informational_count
        count(eval(like(urgency,"low"))) as low_count
        count(eval(like(urgency,"medium"))) as medium_count
        count(eval(like(urgency,"high"))) as high_count
        count(eval(like(urgency,"critical"))) as critical_count,
        sum(isSuppressed) as suppression_count,
        sparkline(count) as activity
        by rule_name
| join rule_name
    [| rest splunk_server=local count=0 /services/saved/searches
     | where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
     | rename action.correlationsearch.label as rule_name action.risk.param._risk as risk_json
     | eval status = if(disabled=="1","disabled","enabled")
     | fields rule_name status ]
| search status!=disabled
| eval informational_count = if(isnull(informational_count),0,informational_count),
       low_count = if(isnull(low_count),0,low_count),
       medium_count = if(isnull(medium_count),0,medium_count),
       high_count = if(isnull(high_count),0,high_count),
       critical_count = if(isnull(critical_count),0,critical_count),
       suppression_count = if(isnull(suppression_count),0,suppression_count)
| fields rule_name activity suppression_count informational_count low_count medium_count high_count critical_count
| addtotals critical_count high_count medium_count low_count informational_count
| sort - Total critical_count high_count medium_count low_count informational_count
| rename Total as total_reported
Hello, presently my hot/warm index occupies 50 GB on disk (there are no limits specified in indexes.conf). I'd like to move it to a faster volume of 10 GB size. What would be the correct steps to achieve this? For example:
- specify maxDataSizeMB = 10000
- restart Splunk (will it shrink the hot index and move the rest of it to the cold path?)
- add the new volume and manually move the hot index files to the new location
Thanks, Andrei
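A sketch of the indexes.conf side (paths, names, and sizes are all placeholders): define the fast disk as a volume with a size cap and point homePath at it. After a restart, Splunk enforces the cap by rolling the oldest warm buckets to the cold path rather than shrinking buckets in place; moving existing bucket directories by hand should only be done with Splunk stopped.

```
[volume:fast]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 10000

[my_index]
homePath = volume:fast/my_index/db
coldPath = /data/cold/my_index/colddb
thawedPath = /data/cold/my_index/thaweddb
```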
How do I subtract the total amount of DepositRequest from the total amount of WithdrawRequest? Result = WithdrawRequest - DepositRequest
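A sketch of one way to compute this (the index name and the field names requestType and Amount are assumptions about the data):

```
index=my_index
| stats sum(eval(if(requestType=="WithdrawRequest", Amount, 0))) AS WithdrawRequest
        sum(eval(if(requestType=="DepositRequest", Amount, 0))) AS DepositRequest
| eval Result = WithdrawRequest - DepositRequest
```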
Hi, we have a dashboard that is getting this error. I am on 8.1.9. The 'Unknown sid' message might stay there for 2 minutes, even though multiple refreshes might have happened; I can't manually refresh it, and in a demo it looks really, really bad. Any ideas what it is and how I can make it stop happening?
Hello, I've defined root_endpoint = /splunk in the web.conf file. But now I'm getting 404s on the /splunk/en-US/static/* files. What did I do wrong? Regards, Nicolas
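For reference, a minimal web.conf sketch (the file location is an assumption about the setup). Changing root_endpoint requires a Splunk Web restart, and if a reverse proxy sits in front, it must forward the whole /splunk prefix, including /splunk/en-US/static/:

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
root_endpoint = /splunk
```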
One server has the Splunk service failing, and it seems splunk-winevtlog.exe is not started. Two of the three services are up and one is always down and not started. I reinstalled the agent but it still did not help.

SplunkForwarder Service (Windows Service Monitor): Up / Up
splunkd.exe (Process Monitor - Windows): Up / Down
splunk-winevtlog.exe (Process Monitor - Windows): Down
Content mapping is not working correctly in the Security Essentials app after the version upgrade to 3.5.0. We have upgraded to 3.5.1 and it is still not working as expected:
- Local saved search names in the Security Essentials app do not match the correlation search names in Splunk ES.
- Custom Content gets piled up with all enabled correlation searches when we update the Content Introspection.
- Overall performance of this page is also very slow.
Hi, I have an index of log events and I have been asked to exclude all events with a certain string in them. The string I need to omit is drminprtmgmt.isus.emc.com. This string (which represents a device) is not mapped to any field currently. How can I filter all events to exclude this string? This is currently what I have (which does NOT work):

Many thanks, Patrick
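A sketch of one approach (the index name is a placeholder): the string does not need to be mapped to a field, because a quoted phrase with NOT filters on the raw event text:

```
index=my_index NOT "drminprtmgmt.isus.emc.com"
```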
Hi, I have read that parallelIngestionPipelines is not working in 8.1; however, that post was old, so I am not sure if it was fixed in 8.1.9. I also read it was fixed in 8.2.3 onwards. Regards, Rob
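For reference, the setting in question lives in server.conf; a minimal sketch (the value is an example, not a recommendation):

```
# server.conf
[general]
parallelIngestionPipelines = 2
```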