All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Task failed: Store encrypted MySQL credentials on disk on host: ESS-LT-68Q9FS3 as user: appd-team-9 with message: Command failed with exit code 1 and stdout Checking if db credential is valid... Mysql returned with error code [127]: /home/appd-team-9/appdynamics/platform/product/controller/db/bin/mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory and stderr .
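A likely cause and fix, assuming a distribution that no longer ships ncurses 5 by default (the package names below are the common cases, not confirmed for this host): the Controller's bundled mysql client links against libtinfo.so.5, which newer distros dropped.

# RHEL/CentOS 8 family: the compatibility package provides libtinfo.so.5
sudo yum install -y ncurses-compat-libs
# Debian/Ubuntu releases that still package it
sudo apt-get install -y libtinfo5
# verify the bundled client now resolves its shared libraries
ldd /home/appd-team-9/appdynamics/platform/product/controller/db/bin/mysql | grep libtinfo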
Hey everyone, I have this format - cn=<name>,ou=<>,ou=people,dc=<>,dc=<>,dc=<> - that I'm pulling, and I need to use only the cn= field. How can I do that with the regex command? Is it possible? Thanks!!
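A minimal sketch: in SPL you likely want rex (which extracts) rather than regex (which only filters events). Assuming the DN string is in the raw event, and cn is just an arbitrary name for the new field:

... | rex "cn=(?<cn>[^,]+)"
| table cn

If the DN is already in its own field (say dn, a hypothetical name), use | rex field=dn "cn=(?<cn>[^,]+)" instead.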
Hi guys, I just installed the misp42 app in my Splunk instance and added a MISP instance to Splunk, and it works. But I want to compare from: index=firewall srcip=10.x.x.x (my firewall logs). I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip=ip-dst : 152.67.251.30. How can I search this with misp_instance=IP_Block field=value? I tried this search but it doesn't work: index=firewall srcip=10.x.x.x | mispsearch misp_instance=IP_Block field=value | search dstip=ip=dst | table _time dstip ip-dst value action. It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution? Many thanks and best regards!!
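A sketch of one possible fix, on the assumption that field= should name the event field whose value mispsearch looks up (so field=dstip rather than the literal word value), and that matches come back as misp_* enrichment fields. The misp_value field name is an assumption; run the search once and check which misp_* fields actually appear:

index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=dstip
| where isnotnull(misp_value)
| table _time srcip dstip misp_value action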
Good morning, I want to confirm how a value is expressed in the sap_idoc_data table after installing the AppDynamics monitoring solution for SAP S/4HANA. Multiple fields are present, and one in particular has me confused: is the field PROCESSING_TIME expressed in seconds or in milliseconds? And what does PROCESSING_TIME correspond to (is it the time for an IDoc to be read by SAP before being sent outbound, or does it have another meaning)? Kind regards
Hi, I have a list of events that contain a customer ID. I'm trying to detect when I have a sequence of events with incremental changes to the ID. Example:
- event A - ID0
- event B - ID1
- event C - ID2
- event D - ID3
I might have other events between these increments that could have unrelated IDs (i.e. event A ID0, event H ID22, event B ID1). I've tried using | streamstats current=f last(CustomerID) as prev_CustomerID | eval increment = CustomerID - prev_CustomerID but without any luck. Do you know a way this could be achieved?
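A sketch of one approach, assuming the search is filtered to the related event types first (index/sourcetype below are placeholders) and CustomerID is numeric. The idea: flag each +1 step, then give every unbroken run of steps a shared run_id and count its length:

index=my_index sourcetype=my_events CustomerID=*
| sort 0 _time
| streamstats current=f last(CustomerID) as prev_id
| eval step=if(CustomerID - prev_id == 1, 1, 0)
| eval brk=if(step==0, 1, 0)
| streamstats sum(brk) as run_id
| eventstats count as run_length by run_id
| where run_length >= 4

run_length counts a run's starting event plus its consecutive +1 followers, so >= 4 keeps chains like ID0, ID1, ID2, ID3. Interleaved unrelated IDs will still break a run, which is why filtering to the related event types first matters.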
Hello everyone, I'm using Splunk DB Connect to get data from a DB. I get three values from the DB, as follows:
- the event's ID
- the JSON data
- the creation date of the event
Here is the result; how can I remove the "rawjson=" prefix to be able to get this data in JSON format? Regards,
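A minimal sketch, assuming the JSON lands in _raw prefixed with rawjson= (the json field name is arbitrary):

... | rex field=_raw "rawjson=(?<json>\{.*\})"
| spath input=json

Alternatively, strip the prefix at index time with a SEDCMD in props.conf on the HF running DB Connect (the sourcetype name here is an assumption):

[my:dbx:sourcetype]
SEDCMD-strip_rawjson = s/rawjson=//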
Hi Community,   The sslPassword in the Search Head's $SPLUNK_HOME/etc/system/local/web.conf is not being hashed. Passwords in other .conf files, like server.conf and authentication.conf in $SPLUNK_HOME/etc/system/local/, are hashed. I changed the password in web.conf recently.   Does anyone have any idea?
Hello Experts, I'm trying to work out how to strip down a field: field="blah_6chars_blah_blah". The 6chars part is what I want to extract, and those 6 chars always start with 999. The 999-prefixed 6 chars might be in a different place in the field, i.e. blah_blah_6chars_blah. Example value: 6chars=999aaa. So the regex should find all occurrences of 999 in the field, extract the 999 plus the next 3 chars, and create an additional field with the result. Thanks
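A sketch with rex, assuming the field is literally named field and the three characters after 999 are word characters (adjust the character class if not). max_match=0 returns every occurrence as a multivalue field:

... | rex field=field max_match=0 "(?<code>999\w{3})"
| table field code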
I am trying to find a way to send a test email through SOAR to check connectivity. Where can I see the option? The Splunk SMTP app is also installed.
Hi Community,   One of the log sources (e.g. index=my_index) in my company's Splunk started landing in index=main. After investigating, I found that the Infrastructure Team had refreshed the device to new hardware due to product EOL (same brand, same product line, e.g. Palo Alto PA-3020 to PA-3220). The device IP also changed, so I modified the monitor path in inputs.conf in the add-on and distributed it to the HF via the deployment server.   Here is an example of what I modified:

[monitor:///siem/data/syslog/192.168.1.101/*] #original ip was 192.168.1.100
disabled = false
index = my_index
sourcetype = my:sourcetype
host_segment = 4

After these changes, I verified on the HF that inputs.conf was successfully updated to the new version.   However, logs still go to index=main when searching on the Search Head.   Does anyone know what else I need to modify? Or could there be another root cause putting the logs in the wrong index, apart from the IP change?
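Two things worth checking. First, data already indexed stays where it was; only events arriving after the change follow the new stanza. Second, if another, broader monitor stanza (e.g. one watching /siem/data/syslog/*) also matches the new directory and carries no index= setting, those events default to main. A sketch to see every stanza that claims the new path, and which .conf file each setting comes from, on the HF:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -B2 -A6 "192.168.1.101"

Also confirm the HF restarted (or reloaded its inputs) after the deployment server pushed the change.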
I have a bucket stuck in the fixup tasks under Indexer Clustering -> Bucket Status, for both SF and RF, so neither the search factor nor the replication factor is met in the indexer cluster.  I tried to roll and resync the bucket manually; that didn't work. There are no buckets listed under excess buckets; I cleared those more than 3 hours ago. Is there any way to meet SF and RF without losing data or the bucket? I even tried restarting the Splunk process on that indexer.
I have a bucket in fixup tasks in indexer cluster-> bucket status, its been struck.  Both SF & RF. So, both SF and RF are not met in indexer cluster.  I tried to roll and resync bucket manually, that didn't work. There're no buckets in excess buckets, i've cleared them like more than 3hrs. Is there any way to meet SF & RF without loosing data or bucket ? I even tried to restart Splunk process on that Indexer Forgot to mention, i had a /opt/cold drive that has I/O error on an indexer. To get it fix i had stop Splunk and remove an indexer from indexer cluster, All other indexers are up and running since last night.  All 45 indexers in cluster-master are up and running and left it to bucket fixup tasks to fix and it also to rebalance overnight. When i check morning there're only 2 fixup tasks left one is in SF & one in RF.  Does it also need manual data rebalance to perform from indexer-cluster as well ?
I found "VersionControl For Splunk" on Github would this add-on work for gitlab as well?
Hello All, I have data in the form of a table with two fields: index, sourcetype. Each row has a unique pair of values for the two fields. I need your guidance on computing and publishing a forecast of the number of events for the next day, based on historical data fetched for each row's index and sourcetype. Any inputs and guidance will be very helpful. Thank you, Taruchit
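A sketch for a single index/sourcetype pair using the built-in predict command over a daily count (the 30-day window and 1-day horizon are choices, not requirements); you would repeat this per row, e.g. with one scheduled search per pair:

index=my_index sourcetype=my_sourcetype earliest=-30d@d latest=@d
| timechart span=1d count
| predict count as forecast future_timespan=1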
I want to compare Dynatrace results (total calls and avg/90th-percentile response times) for the current week vs. last week, and display the differences.  Sample query:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxxxxx"
| spath output=pp_key_user_action input=user_actions path=keyUserAction
| where pp_key_user_action="true"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
| stats count(pp_user_action_response) AS "Total_Calls", perc90(pp_user_action_response) AS "Perc90_User_Action_Response" by pp_user_action_name Avg_User_Action_Response
| eval Perc90_User_Action_Response=round(Perc90_User_Action_Response,0)/1000
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)/1000
| table pp_user_action_name, Total_Calls, Avg_User_Action_Response, Perc90_User_Action_Response
| sort -Perc90_User_Action_Response
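One hedged pattern for the week-over-week part: search a two-week window, label each event with its week via relative_time, aggregate by that label, and pivot with chart. The sketch below trims the sample (the application/keyUserAction filters are omitted for brevity) and shows the 90th percentile only; calls and averages follow the same shape:

index="dynatrace" sourcetype="dynatrace:usersession" earliest=-14d@d latest=@d
| eval week=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_resp input=user_actions path=visuallyCompleteTime
| stats perc90(pp_resp) as p90 by pp_user_action_name week
| eval p90=round(p90/1000, 2)
| chart values(p90) over pp_user_action_name by week
| eval p90_diff = 'current' - 'previous'

The timewrap command is another option if you prefer to keep a timechart shape.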
Hi! I am wondering whether there is any advantage to using a token over a username and passphrase/password when accessing the REST API as a dedicated API user (whose access via credentials is the same as via the token). In our practice for automated API calls, we like to provide just enough resources for the API call and nothing more, so we ended up creating a dedicated functional user in our Splunk Cloud instance for each function that a program, or a group of closely related programs, needs to perform through API calls. Consequently, in almost all cases there is only a single token needed per functional user. Everyone who maintains the programs for that function has access to both the credentials of that functional user and that single token, since those team members need to log in as the user to test their queries. So there is no isolation of access between the token and the user credentials. In this case, is there any advantage to creating and using the token over just using the username and passphrase/password? Thank you!
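For reference, the two call styles differ only in how the request authenticates (a sketch; the stack name, user, and search are placeholders). Even without access isolation, tokens still buy you expiry dates, individual revocation, and audit entries that don't involve the account password, and rotating a token doesn't break interactive logins:

# basic auth with the functional user's credentials
curl -u svc_user:'REDACTED' "https://<stack>.splunkcloud.com:8089/services/search/jobs/export" -d search="search index=_internal | head 1" -d output_mode=json

# bearer-token auth
curl -H "Authorization: Bearer <token>" "https://<stack>.splunkcloud.com:8089/services/search/jobs/export" -d search="search index=_internal | head 1" -d output_mode=json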
I have a response that looks like this:   {"meta":{"code":400},"flag1":false,"flag2":false,"flag3":true}   There are more than 3 flags, but this is an example. Assuming that exactly one flag is true in each response, I want a count of which flag is true the most often, in descending order.
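A sketch using foreach over the flag fields, assuming spath has extracted flag1, flag2, ... with string values "true"/"false" (true_flag is an arbitrary name):

... | spath
| eval true_flag=null()
| foreach flag* [ eval true_flag=if('<<FIELD>>'=="true", "<<FIELD>>", true_flag) ]
| stats count by true_flag
| sort - count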
We're using this query to retrieve metrics on our hosts:

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf","lightwt fwder", fwdType=="full","heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver
| dedup sourceIp
| table connectType, sourceIp, sourceHost, Ver

This gives us everything we need, except the indexes these hosts are sending data to. I'm aware of this query to retrieve the indexes and the hosts sending data to them:

| tstats values(host) where index=* by index

How can I combine the two, either with a join or a subsearch, so that the table output has a column for index, giving us the list of indexes each host is sending to?
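One way to bolt the indexes on: left-join a tstats result keyed on host (a sketch; it assumes the sourceHost values in metrics.log match the host field on the indexed data, which may not hold with intermediate forwarders). Append this after the dedup:

| join type=left sourceHost
    [| tstats values(index) as indexes where index=* by host
     | rename host as sourceHost]
| table connectType, sourceIp, sourceHost, Ver, indexes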
I am trying to save a lookup file in the Splunk App for Lookup File Editing and I get the error: "The lookup file could not be saved." How do I resolve this?
Hello, I was given the administration of a Splunk Enterprise Security deployment and I am not familiar with it; I have always used manual queries from Search & Reporting, and my knowledge is at the Fundamentals 1 and 2 level. Splunk ES currently works and I can see notable events from Palo Alto firewalls. We recently configured Fortinet logs, and these are already coming in under an index called fortinet; with a normal query like index=fortinet I can see events, but I see nothing from Splunk ES. What exactly do I need to do so that the Fortinet events are taken into account by Splunk ES and start generating notable events?
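The usual chain for ES is: install the CIM-compliant Fortinet add-on so the events get the right tags, add the fortinet index to the indexes each relevant data model searches (Configure > CIM Setup in recent ES versions), and enable the correlation searches you want; notables only appear once a correlation search matches. A quick hedged check that the data is reaching the Network_Traffic data model:

| tstats summariesonly=false count from datamodel=Network_Traffic where index=fortinet by sourcetype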
It looks like the StepControlWizard is deprecated as of Splunk version 9.1.1. We are guessing the control must have depended on a version of jQuery older than 3.5, but we're not sure. We found the control was moved to the quarantine folder. Is there a plan to replace this control?