Hi guys, I just installed the misp42 app in my Splunk and added a MISP instance to it, and it works. Now I want to compare against my firewall logs (index=firewall srcip=10.x.x.x): I want to compare the dstip field from the firewall events with the ip-dst attribute from MISP to detect unusual access activity, e.g. when dstip matches an ip-dst value such as 152.67.251.30. How can I search for this with misp_instance=IP_Block field=value? I tried the following search, but it doesn't work:

index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=value
| search dstip=ip-dst
| table _time dstip ip-dst value action

It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution? Many thanks and best regards!
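A sketch of one possible approach: rather than enriching every firewall event with mispsearch, pull the ip-dst indicators out of MISP first with mispgetioc and use them as a subsearch filter. This assumes mispgetioc is available in your misp42 version and that it returns the indicator in a misp_value field; check both against your installation:

index=firewall srcip=10.x.x.x
    [| mispgetioc misp_instance=IP_Block type=ip-dst last=30d
     | rename misp_value as dstip
     | fields dstip]
| table _time srcip dstip action

The subsearch returns the MISP ip-dst values renamed to dstip, so the outer search only keeps firewall events whose destination IP matches an indicator.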
Good morning, I want to understand how a value is expressed in the sap_idoc_data table after installing the AppDynamics monitoring solution for SAP S/4HANA. Multiple fields are present, and one in particular has me confused: is the PROCESSING_TIME field expressed in seconds or in milliseconds? And how can I understand what PROCESSING_TIME corresponds to (is it the time for an IDoc to be read by SAP before being sent outbound, or does it have another meaning)? Kind regards
Hi, I have a list of events that contain a customer ID. I'm trying to detect when I have a sequence of events with incremental changes to the ID. Example:
- event A - ID0
- event B - ID1
- event C - ID2
- event D - ID3

I might have other events between these increments that could have unrelated IDs (e.g. event A ID0 - event H ID22 - event B ID1). I've tried using

| streamstats current=f last(CustomerID) as prev_CustomerID
| eval increment = CustomerID - prev_CustomerID

but without any luck. Do you guys know a way this could be achieved?
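One sketch that may work, assuming the IDs are numeric and that what matters is the presence of a consecutive run within the search window (sorting by ID rather than by time means interleaved unrelated events don't break the comparison). index=my_index is a placeholder:

index=my_index
| eval cid=tonumber(CustomerID)
| sort 0 cid
| dedup cid
| streamstats current=f last(cid) as prev_cid
| eval is_break=if(cid - prev_cid == 1, 0, 1)
| streamstats sum(is_break) as run_id
| eventstats count as run_length by run_id
| where run_length >= 3
| table _time cid run_id run_length

Each time the gap to the previous ID is not exactly 1, is_break starts a new run_id; run_length then counts how many consecutive IDs each run contains, and the where clause keeps runs of 3 or more.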
Hello everyone, I'm using Splunk DB Connect to get data from a DB. I get three values from the DB, as follows:
- the event's ID
- JSON data
- the creation date of the event

Here is the result. How can I remove the "rawjson=" prefix to be able to work with this data in JSON format? Regards,
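A minimal sketch, assuming the JSON arrives in one of the returned columns prefixed with "rawjson=" (here the column name data, the query, and the connection are all placeholders for your own setup): strip the prefix with rex, then hand the remainder to spath:

| dbxquery query="SELECT ..." connection=my_connection
| rex field=data "rawjson=(?<json>\{.*\})"
| spath input=json

spath then extracts the JSON keys as regular fields. If the JSON ends up in _raw instead of a named column, point the field= option of rex at _raw.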
Hi Community, the sslPassword in the Search Head's $SPLUNK_HOME/etc/system/local/web.conf is not being hashed. Passwords in other .conf files, like server.conf and authentication.conf in $SPLUNK_HOME/etc/system/local/, are hashed. I changed the password in web.conf recently. Does anyone have any idea?
Hello Experts, I'm trying to work out how to strip down a field: field="blah_6chars_blah_blah". The 6chars part is what I want to extract, and those 6 chars are always prefixed with 999. The 999-prefixed chars might be in a different place in the field, e.g. blah_blah_6chars_blah. Example value: 6chars=999aaa. So the regex should find all occurrences of 999 in the field, extract the 999 plus the next 3 chars, and create an additional field with the result. Thanks
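A sketch using rex with max_match=0, so every occurrence is captured into a multivalue field. The \w{3} assumes the 3 characters after 999 are alphanumeric; widen the character class if other characters can appear:

| rex field=field max_match=0 "(?<extracted>999\w{3})"
| table field extracted

Each match (e.g. 999aaa) becomes one value of the multivalue extracted field.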
I am trying to find a way to send a test email through SOAR to check connectivity. Where can I see the option? The Splunk SMTP app is also present.
Hi Community, one of the log sources (e.g. index=my_index) at my company's Splunk started landing in index=main. After some investigation, I found that the Infrastructure Team had refreshed the device to new hardware due to product EOL (same brand, same product line, e.g. Palo Alto PA-3020 to PA-3220). The device IP also changed, so I modified the monitored path in inputs.conf in the add-on and distributed it to the HF via the deployment server. Here is an example of what I modified:

[monitor:///siem/data/syslog/192.168.1.101/*] #original IP was 192.168.1.100
disabled = false
index = my_index
sourcetype = my:sourcetype
host_segment = 4

After these changes I verified the result on the HF, and inputs.conf was successfully updated to the new version. However, the logs still end up in index=main when searching on the Search Head. Does anyone know if there is anything else I need to modify? Or could there be another root cause making the logs fall under the wrong index, apart from the IP change?
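One way to narrow this down, as a sketch (the sourcetype and paths follow the example above): look at which source paths the stray events actually come from. If source shows the new IP directory but the index is still main, another monitor stanza without an index setting (for example a broader [monitor:///siem/data/syslog] stanza) may be matching the same files first:

index=main sourcetype=my:sourcetype
| stats count by host source
| sort - count

Running $SPLUNK_HOME/bin/splunk btool inputs list --debug on the HF also shows every effective monitor stanza and which app it comes from, which helps spot an overlapping input.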
I have a bucket stuck in fixup tasks under indexer clustering -> bucket status, for both SF and RF, so neither the search factor nor the replication factor is met in the indexer cluster. I tried to roll and resync the bucket manually, but that didn't work. There are no buckets under excess buckets; I cleared those more than 3 hours ago. Is there any way to meet SF and RF without losing data or the bucket? I even tried restarting the Splunk process on that indexer. I forgot to mention: I had a /opt/cold drive with an I/O error on one indexer. To get it fixed I had to stop Splunk and remove that indexer from the cluster; all the other indexers have been up and running since last night. All 45 indexers on the cluster master are up and running, and I left the bucket fixup tasks to fix and rebalance overnight. When I checked in the morning, there were only 2 fixup tasks left: one for SF and one for RF. Do I also need to perform a manual data rebalance from the indexer clustering page as well?
I found "VersionControl For Splunk" on Github would this add-on work for gitlab as well?
Hello All, I have data in the form of a table with two fields: index and sourcetype. Each row has a unique pair of values for the two fields. I need your guidance on how to compute and publish a forecast of the number of events for the next day, based on historical data fetched for each row's index and sourcetype combination. Any inputs and guidance will be very helpful. Thank you, Taruchit
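A sketch for a single (index, sourcetype) pair, using the predict command on a daily event count; my_index and my_sourcetype are placeholders, and you would repeat (or script) this per row of your table:

| tstats count where index=my_index sourcetype=my_sourcetype by _time span=1d
| predict count as forecast future_timespan=1

predict appends one extra daily data point (plus confidence bounds) beyond the historical series; the last row's forecast value is the expected event count for the next day.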
I want to compare Dynatrace results (total calls and avg/90th-percentile response times) for the current week vs. last week, and display the differences. Sample query:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxxxxx"
| spath output=pp_key_user_action input=user_actions path=keyUserAction
| where pp_key_user_action="true"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
| stats count(pp_user_action_response) AS "Total_Calls", perc90(pp_user_action_response) AS "Perc90_User_Action_Response" by pp_user_action_name Avg_User_Action_Response
| eval Perc90_User_Action_Response=round(Perc90_User_Action_Response,0)/1000
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)/1000
| table pp_user_action_name, Total_Calls, Avg_User_Action_Response, Perc90_User_Action_Response
| sort -Perc90_User_Action_Response
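One sketch of the week-over-week comparison pattern, shown for a single metric to keep it short (the same chart/eval step extends to the response-time metrics). It assumes a time range covering both weeks (earliest=-1w@w), classifies each event by week before pivoting, and uses mvexpand in place of the stats-by-user_actions trick so _time survives for the week classification:

index="dynatrace" sourcetype="dynatrace:usersession" earliest=-1w@w
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| eval week=if(_time >= relative_time(now(), "@w"), "this_week", "last_week")
| stats count as Total_Calls by pp_user_action_name week
| chart sum(Total_Calls) over pp_user_action_name by week
| eval Calls_Diff = this_week - last_week

After the chart, each user action is one row with last_week and this_week columns, so the difference is a simple eval.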
Use AppDynamics’ built-in machine learning capabilities to conduct fast and effective root cause analysis (RCA)

CONTENTS | Introduction | Video | Resources | About the presenter

Video Length: 2 min 16 seconds

The root cause of a performance issue can be complex to troubleshoot, but it does not have to be. By using AppDynamics’ built-in machine learning capabilities, we can quickly identify Health Rule violations triggered by transaction response times deviating from their baseline, and then combine those with diagnostic capabilities that get us to the specific cause. We are able to drill down into the relevant snapshots to see which method and specific line of code is to blame.

In just a few moments, we demonstrate how to mine the root cause of a performance issue by leveraging the AppDynamics AI engine to gain insight into the exact line of code causing problems.

Additional Resources

Learn more about Anomaly Detection in the documentation.

About the presenter: Vivek Inbakumar

Vivek Inbakumar, Sales Engineer

Vivek Inbakumar joined AppDynamics as a Sales Engineer in 2021. He got into the world of IT upon graduating as a computer engineer and subsequently completed his executive MBA at the University of London. With his passion for staying close to customers combined with strong technical knowledge, sales engineering offered the best of both worlds. Based out of the Cisco Dubai office, he helps customers across the Middle East and Africa improve their application monitoring practices.

He is also a vital member of the Cloud Native Application Observability Champions team, where he extends his knowledge to empower both colleagues and customers, demonstrating how AppDynamics’ cutting-edge monitoring tools can revolutionize their approach to full-stack observability.

Vivek is driven by a core belief in the power of technology to transform businesses and improve lives. His commitment to understanding and meeting the unique needs of each customer sets him apart as a dedicated advocate for client success.
Hi! I am wondering whether there is any advantage to using a token over a username and passphrase/password when accessing the REST API as a dedicated API user (whose access via credentials is the same as via the token). In our practice for automated API calls, we like to provide just enough resources for the call and nothing more, so we ended up creating a dedicated functional user in our Splunk Cloud instance for each function that a program, or a group of closely related programs, needs to perform through API calls. Consequently, in almost all cases only a single token is needed per functional user. Everyone who maintains the programs for that function has access to both the credentials of that functional user and that single token, since those team members need to log in as the user to test their queries. Therefore, there is no isolation of access between the token and the user credentials. So in this case, is there any advantage to creating and using the token over just using the username and passphrase/password? Thank you!
I have a response that looks like this:

{"meta":{"code":400},"flag1":false,"flag2":false,"flag3":true}

There are more than 3 flags, but this is an example. Assuming that only one flag is true in each response, I want to get a count of which flag is true the most times, in descending order.
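A sketch using spath plus a foreach template over the flag* fields. It assumes the events are the raw JSON shown above (spath turns the JSON booleans into the strings "true"/"false") and that all the flags share the flag prefix:

| spath
| eval true_flag=null()
| foreach flag* [ eval true_flag=if('<<FIELD>>'="true", "<<FIELD>>", true_flag) ]
| stats count by true_flag
| sort - count

foreach runs the eval once per field matching flag*, recording the name of whichever flag is "true"; stats then counts how often each flag was the true one.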
We’re happy to announce the release of Mission Control 3.0, which includes several new and exciting features made available to Splunk Enterprise Security Cloud users. The new features, detailed below, improve your user experience in Mission Control by helping you:

- Simplify incident aggregation by consolidating related incidents for easier investigations with full context
- Seamlessly create incidents manually and add summary fields with multiple values
- Quickly access visualizations to better identify risk with a risk event timeline

This release continues adding capabilities that unify security operations for Splunk users. We are bringing together your threat detection, investigation, and response workflows into a common work surface. As a reminder, Mission Control is available today for Splunk Cloud Enterprise Security customers who are eligible for Mission Control, and it’s quick and simple to enable. Below are more details and screenshots of our latest release; you can find further details, like release notes, on our documentation site.

Simpler Incident Aggregation to Streamline Processes
By merging related incidents using a parent/child view, you can minimize duplicate effort, ensure accurate prioritization, and seamlessly inherit key attributes for streamlined processing. This feature is being released as a Preview version only.

Seamlessly Create Incidents Manually and Add Summary Fields with Multiple Values
You now have the ability to add summary field information in the Mission Control UI. Many times you need to create incidents manually and set the owner, status, or other fields. Creating these incidents manually requires adding custom fields like “username”, which is now enabled in the UI with a new + button for adding summary fields.

Quickly Access Visualizations to Better Identify Risk with a Risk Event Timeline
Visualizations related to risk notables, like the interactive Risk Event Timeline, as well as the MITRE ATT&CK Matrix visualization, are quickly accessible within incidents in Mission Control.

For more information on Mission Control, such as a product tour, demo videos, blog posts, white papers, and webinars, please visit the home page for Splunk Mission Control.
We're using this query to retrieve metrics on our hosts:

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf","lightwt fwder", fwdType=="full","heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver
| dedup sourceIp
| table connectType, sourceIp, sourceHost, Ver

This gives us everything we need, except which indexes these hosts are sending data to. I'm aware of this query to retrieve the indexes and the hosts that are sending data to them:

| tstats values(host) where index=* by index

How can I combine the two, either with a join or a subsearch, so that the table output has a column for index, giving us a list of the indexes each host is sending to?
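A sketch of one way to combine them, appended to the end of the existing query (shown as a placeholder here): flip the tstats query to group by host and left-join it on the host name. One assumption worth checking is that the index-time host field matches the hostname reported in metrics.log; if forwarders report short names but events carry FQDNs, the join keys won't line up:

<your existing query>
| join type=left sourceHost
    [| tstats values(index) as indexes where index=* by host
     | rename host as sourceHost]
| table connectType, sourceIp, sourceHost, Ver, indexes

Hosts with no matching index data keep an empty indexes column because of the left join.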
I am trying to save a lookup file in the Splunk App for Lookup File Editing, and I get the error "The lookup file could not be saved." How do I resolve this?
Hello, I was given the administration of a Splunk Enterprise Security deployment and I am not familiar with it; I have always used manual queries from Search & Reporting, and I have knowledge at the level of Fundamentals 1 and 2. Splunk ES currently works and I can see notable events from Palo Alto firewalls, but I recently configured Fortinet logs and these are already coming in under an index called fortinet. When doing a normal query with index=fortinet I can see events, but I see nothing in Splunk ES. What exactly do I need to do to get the Fortinet events taken into account by Splunk ES so it starts generating notable events?
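As context for a sketch: ES correlation searches generally don't read raw indexes directly; they run against CIM data models, which are populated via the tags and eventtypes that a CIM-compliant add-on (for Fortinet, typically the Fortinet FortiGate Add-On for Splunk) applies to the right sourcetypes. A quick check of whether that normalization is happening on the fortinet index:

index=fortinet
| head 1000
| stats count by sourcetype eventtype tag

If eventtype and tag come back empty, the add-on is missing or the sourcetype doesn't match what the add-on expects; once the events carry tags like network and communicate, the data models and ES correlation searches can pick them up and generate notables.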
It looks like the StepControlWizard is deprecated as of Splunk version 9.1.1. We are guessing the control must have been using a pre-3.5 version of jQuery, but we're not sure. We found the control was moved to the quarantine folder. Is there a plan to replace this control?