All Posts



Hi - Is there a way to get two non-streaming searches to run in parallel in the same SPL? I am using "appendcols", but I think one is waiting for the other to finish. I can't use multisearch as I don't have streaming commands. The use case is displaying the license used by Splunk, and I want to run the two searches in parallel because it is very slow if they run in sequence. Thanks in advance for any help.

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| search pool = "*"
| search idx != "mlc_log_drop"
| timechart span=1d sum(b) AS Live_Data fixedrange=false
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| appendcols
    [ search index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
    | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
    | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
    | eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
    | bin _time span=1d
    | stats sum(b) as b by _time, pool, s, st, h, idx
    | search pool = "*"
    | search idx = "mlc_log_drop"
    | timechart span=1d sum(b) AS Log_Drop_Data fixedrange=false
    | fields - _timediff
    | foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]]
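One way to sidestep the parallelism question entirely, sketched below, is to classify each event in a single pass and let timechart split the two series, so only one search runs. The series name and the classifying eval are my own, and this is untested against the original data:

```
index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
``` classify each event instead of running two searches ```
| eval series=if(idx=="mlc_log_drop","Log_Drop_Data","Live_Data")
| bin _time span=1d
| stats sum(b) as b by _time, series
| timechart span=1d fixedrange=false sum(b) by series
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
```

Because both of the original subsearches scan the same events and differ only in the idx filter, a single scan with a split-by field should produce the same two columns.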
No, unfortunately I didn't get it solved. But I didn't spend any more time on the problem.
Hi, I have a Splunk Enterprise environment. After configuring SAML via the frontend, it is not redirecting to the portal after authentication. What could be the reason?
Hi, I'm running the curl command:

curl -vvvvv https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk <token>" -d '{"sourcetype": "my_sample_data", "event": "ping"}'

and I got:

* Trying <IP>:8088...
* connect to <IP> port 8088 failed: Operation timed out
* Failed to connect to prd-p-xxxxx.splunkcloud.com port 8088 after 17497 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to prd-p-xxxxx.splunkcloud.com port 8088 after 17497 ms: Couldn't connect to server

I have a free trial, HEC is enabled, and the token is valid. What could cause this problem?
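In case it helps to rule out the client side, here is a minimal Python sketch of the same HEC call. The hostname and token are the placeholders from the question, and `build_payload`/`send_event` are my own names, not a Splunk API. If this also times out, the problem is network reachability to port 8088 rather than the token or the payload:

```python
import json
import urllib.request

# Placeholders copied from the question -- substitute your own stack and token.
HEC_URL = "https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event"
TOKEN = "<token>"

def build_payload(sourcetype, event):
    """Build the JSON body the HEC event endpoint expects."""
    return json.dumps({"sourcetype": sourcetype, "event": event})

def send_event(sourcetype, event, timeout=10):
    """POST one event to HEC; raises urllib.error.URLError on a
    connection failure or timeout (the symptom in the question)."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_payload(sourcetype, event).encode("utf-8"),
        headers={"Authorization": f"Splunk {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read().decode("utf-8")
```

A timeout here, like the curl timeout, points at the network path to port 8088 (a firewall, or the HEC endpoint not being reachable on that hostname for the trial stack) rather than at the HEC configuration itself.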
You are correct, mvexpand of a values() or list() field will duplicate the event. If you want to count by ErrorCode separately, include ErrorCode in your by clause of the stats command.
Assuming your times will actually be 24-hour clock times (and poor Roger and Novak aren't on 21.5-hour shifts!), you could do something like this:

| makeresults format=csv data="Start time,End time,Team,Employee Name,Available
8:00,17:30,Team A,Roger,Y
17:30,8:00,Team A,Federer,Y
8:00,17:30,Team B,Novak,Y
17:30,7:00,Team B,Djokovic,Y"
``` The lines above create some simulated data based on your example ```
``` Convert start and end times to minutes of the day (assuming times are strings) ```
| eval start=60*tonumber(mvindex(split('Start time',":"),0))+tonumber(mvindex(split('Start time',":"),1))
| eval end=60*tonumber(mvindex(split('End time',":"),0))+tonumber(mvindex(split('End time',":"),1))
``` Determine how many days the shift is part of ```
| eval days=if(start < end,1,2)
``` Duplicate the event for multiple days ```
| eval day=mvrange(0,days)
| mvexpand day
``` Adjust start minute if second day ```
| eval start=if(days<2,start,if(day==1,0,start))
``` Adjust end minute if first day ```
| eval end=if(days<2, end,if(day==0,24*60,end))
``` Determine minutes covered by shift pattern ```
| eval minutes=mvrange(start,end)
| stats dc(minutes) as cover by Team
``` Find which teams do not have every minute covered ```
| where cover < 24*60

Depending on how your shift times are defined, you may be able to adjust this to use 30-minute spans (as suggested by your example), but the principle is the same.
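For anyone who wants to prototype the same minute-coverage idea outside SPL, here is a small Python sketch. The shift data is the simulated example from above; the function names and tuple layout are my own:

```python
# Sketch of the shift-coverage check: collect every minute of the day that a
# team's shifts cover, then flag teams with uncovered minutes.
# Assumes 24-hour "H:MM" times; an end earlier than its start wraps past midnight.

def to_minutes(hhmm):
    """Convert an 'H:MM' string to minutes since midnight."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def uncovered_teams(shifts):
    """shifts: list of (start, end, team) tuples. Returns teams with gaps."""
    covered = {}  # team -> set of covered minutes
    for start, end, team in shifts:
        s, e = to_minutes(start), to_minutes(end)
        if s < e:                       # shift within a single day
            minutes = range(s, e)
        else:                           # overnight shift: split at midnight
            minutes = list(range(s, 24 * 60)) + list(range(0, e))
        covered.setdefault(team, set()).update(minutes)
    return sorted(t for t, mins in covered.items() if len(mins) < 24 * 60)

shifts = [
    ("8:00", "17:30", "Team A"), ("17:30", "8:00", "Team A"),
    ("8:00", "17:30", "Team B"), ("17:30", "7:00", "Team B"),
]
print(uncovered_teams(shifts))  # -> ['Team B'] (gap from 7:00 to 8:00)
```

This mirrors the SPL: the overnight split plays the role of the mvexpand over days, and the set of minutes plays the role of dc(minutes).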
Hello, we got following error by setting up AbuseIPDB Api Key setup Page: (Splunk Version 9.0.6)   Is there another way to put in the api key     chears     
Your example seems to change the underscore to a hyphen (I have assumed that this is a typo). Also, your criteria are not very precise, so I have assumed that you mean: not an underscore, followed by an underscore, followed by not an underscore, somewhere in the name.

| eval APP=if(match(name,"[^_]_[^_]"),name,null())
| eval Host=if(match(name,"[^_]_[^_]"),null(),name)

You may need to adjust the match expression if the criteria I have used are not what you meant.
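As a quick illustration of what that match expression accepts and rejects, here is a small Python sketch (Python's re engine treats this pattern the same way Splunk's PCRE does; the sample names beyond the question's own are made up):

```python
import re

# "[^_]_[^_]" matches a single underscore with a non-underscore
# character on each side, anywhere in the string.
pattern = re.compile(r"[^_]_[^_]")

for name in ["ftr_score", "terabyte", "a__b", "_leading"]:
    field = "APP" if pattern.search(name) else "Host"
    print(f"{name} -> {field}")
# ftr_score -> APP, terabyte -> Host, a__b -> Host, _leading -> Host
```

Note that a doubled underscore or a leading/trailing underscore does not satisfy the pattern, which is the "not very precise criteria" caveat in the answer above.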
I can't make it work. I found some explanation here: https://community.splunk.com/t5/Getting-Data-In/How-to-replace-characters-in-logs-using-SEDCMD-in-props-conf-in/m-p/392306 but they said the change should be made in the HF's props.conf. I need to make it work on a UF for Splunk Cloud.
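For what it's worth, SEDCMD runs in the parsing pipeline, which a universal forwarder (in its default configuration) does not run, so for Splunk Cloud the props.conf change has to live on the parsing tier: the Cloud stack itself (for example via a private app uploaded to the cloud instance) or an intermediate heavy forwarder. A sketch of the stanza, with a hypothetical sourcetype and sed expression:

```
# props.conf -- deploy on the parsing tier (Splunk Cloud stack or an
# intermediate heavy forwarder), not on the UF
[my_sourcetype]
SEDCMD-mask_example = s/secret/REDACTED/g
```

Both the sourcetype name and the substitution here are placeholders; the point is only where the stanza needs to be deployed.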
After some help. Is there any way to get this to use a custom port for the two servers that use a non-443 port?

| makeresults
| eval dest="url1,url2,url3", dest=split(dest,",")
| mvexpand dest
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name=split(ssl_subject_alt_name,"|")
| eval days_left=round(ssl_validity_window/86400)
| table ssl_subject_common_name ssl_subject_alt_name days_left ssl_issuer_common_name
| sort days_left

I tried adding the port to the first eval, e.g.

| eval dest="url1,url2,url3", dest_port=8443, dest=split(dest,",")

It would be great if both the standard and custom ports could be returned together.
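If the lookup cannot be coaxed into a non-standard port, one fallback is to fetch the certificate yourself. Here is a rough Python sketch that connects to an arbitrary host:port and computes a days_left figure the same way; `cert_days_left` and `days_left_from_notafter` are my own helper names, not part of any Splunk app:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_left_from_notafter(not_after):
    """Convert a cert's notAfter string, as returned by getpeercert()
    (e.g. 'Jun  1 12:00:00 2025 GMT'), into whole days remaining from now."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return round((expires - datetime.now(timezone.utc)).total_seconds() / 86400)

def cert_days_left(host, port=443, timeout=10):
    """Fetch the server certificate from host:port and return days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_left_from_notafter(cert["notAfter"])

# e.g. cert_days_left("url2", 8443) for a server on a custom port
```

Run against both 443 and the custom ports, the results could be combined into one table outside the lookup.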
Hi _JP, thanks for the response. Yes, the instance is an indexer. I have read the linked documents and I understand more of the detail about the indexer and how it stores the various stages of data. I'll review, and in the meantime we will be adding additional FS space. Thank you.
Hi @gcusello , when do logs arrive in Splunk after a user has performed an activity?
@Splunkerninja there are many ways to achieve this, for example like below:

| makeresults
| eval name="ft_name_1"
| eval underscorematch=if(match(name,"._."),"Yes","No")
| eval name_value=if(underscorematch="Yes",name,"NA")
| table name underscorematch name_value
Looks like the Dynatrace tenant URL is not complete. The part after the tenant is missing: /e/<your_environment>
Hi @AL3Z , could you describe your question in more detail? Logs are indexed when they arrive in Splunk. Are you speaking of Splunk Enterprise or Enterprise Security? In any case, when an alert is triggered, it is written to an index, to the triggered alerts list, or to a lookup, depending on the action you defined for that alert. Ciao. Giuseppe
Hi, I am checking for an underscore in field values and, if one is present, capturing that value. For example: if name has an underscore in it, the value should get assigned to the APP field, and if it does not have an underscore in it, the value should get assigned to the Host field.

name       APP        Host
ftr_score  ftr-score  NA
terabyte   NA         terabyte

I have tried using case and like statements but it does not work as expected.
Hi, I'm curious to know when the logs will be indexed after an incident is triggered in Splunk. Thanks
@meekah Hope you've figured out how to ingest Salesforce logs, but just in case, please try turning off the "Require Proof Key for Code Exchange (PKCE) Extension for Supported Authorization Flows" checkbox in your Connected App's settings. It is located under the Scopes and Callback URLs (at least in the old UI). I have found several other topics with the same problem and the same resolution.
I have a requirement to check if an employee shift roster (a lookup in Splunk) covers 24 hours in a day for each team. If it doesn't, I need to send out an alert to the respective team notifying them that their shift roster is not configured properly. Can anybody help me out with how I can proceed on this? The employee_shift_roster.csv looks something like this:

Start time  End time  Team    Employee Name  Available
8:00        5:30      Team A  Roger          Y
5:30        8:00      Team A  Federer        Y
8:00        5:30      Team B  Novak          Y
5:30        7:00      Team B  Djokovic       Y

Now the alert should go out to Team B stating that their shift roster is not configured properly because 24 hours are not covered by the shifts. Thanks in advance for the help.
Hello, I am using Apache Tomcat for my application. Using AppDynamics console, I downloaded the Java Agent for my application. After adding the agent path under setenv.bat for Apache Tomcat and runn... See more...
Hello, I am using Apache Tomcat for my application. Using the AppDynamics console, I downloaded the Java Agent for my application. After adding the agent path under setenv.bat for Apache Tomcat and running the server, I do get a notification saying "Started AppDynamics Java Agent Successfully". However, when I navigate to the Applications tab in the AppDynamics console, I don't see any metrics, and under Application Agents I don't see any agent registered. I verified the controller-info.xml file for the agent and it contains all the parameters needed to send details to my instance, but the metrics are not reported. Please help.