All Topics

Hello, I have a Splunk ES instance on AWS. All logs are forwarded there from a Splunk HF (full forwarding, no indexing) which collects Active Directory data. The domain is accessible only via VPN. I would like to populate Assets and Identities in ES. Since the cloud instance cannot access the domain, the only way I can think of is using SA-LDAPSearch on the heavy forwarder. I set it up and it successfully connects to LDAP. Question: how can I push the logs and create the lookup tables that will eventually populate Assets and Identities in ES? Thanks!
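A minimal sketch of one way this is often wired up, assuming SA-LDAPSearch is configured on the heavy forwarder with a domain stanza named default; the attribute list, the staging index ad_identity_staging, and the lookup name my_ldap_identities.csv are placeholder assumptions, not ES requirements. The first search runs on the HF on a schedule and forwards the results as indexed events; the second runs on the ES search head and builds a lookup that can then be registered under Asset and Identity Management.

Scheduled on the heavy forwarder:

| ldapsearch domain=default search="(&(objectCategory=person)(objectClass=user))" attrs="sAMAccountName,displayName,mail"
| eval identity=sAMAccountName, nick=displayName, email=mail
| table identity, nick, email
| collect index=ad_identity_staging

Scheduled on the ES search head:

index=ad_identity_staging
| dedup identity
| table identity, nick, email
| outputlookup my_ldap_identities.csv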
Hello! I have a search table that matches some values and users, like this:

is_old_OS_version   username
true                Bob
false               Marie
true                Alice

I want to send alerts to Slack only to Bob and Alice, not to Marie. I know that I need a Slack application and I have already made one, but how do I integrate Splunk with this application and mention only the persons I need? Basically I have two strategies here:
1. Send to some channel and mention the people I need with @ (not the best option, because I will mention lots of people with old software in one place).
2. Send directly to the person.
There are multiple Splunk applications that help integrate with Slack, but as far as I can see, I can only choose one channel ID per alert, and I need to dynamically change this ID or find another way.
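A minimal SPL sketch of the per-user routing half of this, assuming a hypothetical lookup slack_user_map.csv that maps username to a Slack member ID; the lookup and field names are assumptions. The resulting slack_member_id per row can then be handed to the alert action or a small webhook script, and, as far as I recall, Slack's chat.postMessage accepts a member ID as the channel value to deliver a direct message.

... base search producing is_old_OS_version and username ...
| where is_old_OS_version="true"
| lookup slack_user_map.csv username OUTPUT slack_member_id
| where isnotnull(slack_member_id)
| table username, slack_member_id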
Hi there, I have been trying to use spath to extract fields inside a string. Currently, the string has this format:

stringField=["fieldOne": "fieldValue", "fieldTwo": "fieldValue", "fieldThree": "fieldValue"]

So the string contains some kind of array of key-value pairs. I would like to extract those fields and values in a way that lets me use them in my queries, i.e. get the value of fieldOne by just referencing the fieldOne field, so I can use it in my stats and so on. I was trying something like the following, but no luck:

search... | spath input=stringField
search... | eval newVariable=spath(_raw, 'stringField')
search... | spath
search... | spath path=stringField output=newField

The first option, just using input with the spath command, only gave me back the first field inside my string, and it was listed as a {} field. I would really appreciate the help!
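Since the value is not valid JSON (square brackets wrapping key: value pairs), spath alone cannot walk it. A minimal sketch of one workaround, assuming the delimiters are always a single leading [ and trailing ]: rewrite them into object braces first, then spath the result. Field names are taken from the example above.

search ...
| eval json_like=replace(replace(stringField, "^\[", "{"), "\]$", "}")
| spath input=json_like
| table fieldOne, fieldTwo, fieldThree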
Hi, our current requirement is to install two UFs, versions 8.0.2 and 8.0.6, on one single Windows VM server. We installed the first UF in the normal way by following the Splunk docs, but when we try to install the second UF, it installs into the same directory and upgrades the already installed UF. Is it possible to run two UFs together on a Windows server? If so, please let me know the steps and procedure to install them. I found this link; has anyone tried it, and did it work by following these steps? https://www.splunk.com/en_us/blog/tips-and-tricks/running-two-universal-forwarders-on-windows.html Thanks
Hello Splunkers, I want to optimize my Splunk search. I have attached a screenshot of my search. From the raw data I am retrieving the service names with an OR condition. I don't want to hardcode all the service names using an OR clause. Please give me some suggestions on how I can optimize the search without the OR clause.
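One common way to avoid the hardcoded OR list is to keep the service names in a lookup and let a subsearch expand them into the filter. This is only a sketch: the index, sourcetype, lookup file monitored_services.csv, and the field name service are assumptions and need to match your data.

index=my_index sourcetype=my_sourcetype
    [ | inputlookup monitored_services.csv | fields service ]
| stats count by service

The subsearch returns its rows as an implicit (service="A" OR service="B" ...) condition, so adding or removing a service only means editing the lookup.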
Hi all, I'm very new to Splunk. Can someone help me understand after how many days data moves from the hot bucket to the warm bucket? Note: I know the default is 90 days, but I need documented proof that I can show, so can someone guide me on where I can find this? Thank you in advance!
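For what it's worth, hot buckets do not roll to warm on a fixed 90-day schedule; rolling is controlled by indexes.conf settings such as maxHotSpanSecs (whose default of 7776000 seconds is the likely source of the 90-day figure), maxDataSize, and maxHotBuckets, so a bucket can roll much earlier. The indexes.conf reference in the Splunk docs is the citable proof, and a sketch like the one below (the index name is a placeholder) shows the actual bucket states on your own system:

| dbinspect index=main
| stats count AS buckets, min(startEpoch) AS oldest_event, max(endEpoch) AS newest_event by state
| convert ctime(oldest_event) ctime(newest_event)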
Hi all, I need to understand how the standard deviation and moving average are calculated in AppDynamics for slow and very slow transaction thresholds. Please help with examples. Is it possible to do a manual calculation of the standard deviation and moving average using the reference values from AppDynamics for a few of the transactions?

^ Post edited by @Ryan.Paredez for formatting and clarity
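AppDynamics does not expose its exact baselining internals here, so the following is only an illustrative sketch of how a moving average and standard deviation over a sliding window of response times could be reproduced by hand; a threshold of the form "slower than k standard deviations" then flags a transaction whose response time exceeds the mean plus k times the deviation. The window length n and the multiplier k come from your own threshold configuration.

\mu_t = \frac{1}{n}\sum_{i=t-n+1}^{t} x_i, \qquad
\sigma_t = \sqrt{\frac{1}{n}\sum_{i=t-n+1}^{t} (x_i - \mu_t)^2}, \qquad
x \text{ is slow if } x > \mu_t + k\,\sigma_t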
Hi all, I have configured the 99th percentile response time in the configuration tab under slow transaction thresholds for business transactions. The 99th percentile response time metric is not available/visible under the metric browser for the business transaction. Are any extra steps needed after configuring the metric under slow transaction thresholds for it to appear in the Metric Browser?

Regards
Naveen D
I have been trying to forward XML file logs to Splunk with the splunk-connect-for-kubernetes repo. Can you please help with the path of the XML log files and the configuration for the container XML log file?

# path of logfiles, default /var/log/containers/*.log
# Configurations for container logs
containers: ?
  # Path to root directory of container logs
  path: ?
  # Final volume destination of container log symlinks
  pathDest: ?
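For reference, a minimal sketch of what this section of a splunk-connect-for-kubernetes values.yaml often looks like, assuming a Docker runtime whose container log symlinks in /var/log/containers point into /var/lib/docker/containers (with containerd the destination is usually /var/log/pods instead); please verify against the default values.yaml in the repo, and note that a multi-line XML application log will usually also need its own sourcetype/logs stanza.

# Configurations for container logs (illustrative values, not the only correct ones)
containers:
  # Path to root directory of container logs on the node
  path: /var/log
  # Final volume destination of container log symlinks (Docker default; /var/log/pods for containerd)
  pathDest: /var/lib/docker/containers
fluentd:
  # glob of log files to tail; default /var/log/containers/*.log
  path: /var/log/containers/*.log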
Hello there, I am working on VMware. I have two Linux machines that I'm using as universal forwarders (an Ubuntu desktop and a Linux server, both configured the same way as forwarders). I have another Linux machine that I'm using as an indexer. The thing is that one of my forwarders (the Linux server) is forwarding correctly to the indexer, and I can see all the information I need in the main index. But the second forwarder's logs are nowhere to be found. Although I can see the second universal forwarder when I search index=_internal, no logs from it appear in the main index. Can someone help me so I can see the second forwarder's logs? Have a great day everyone! Abir
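A couple of sketch searches that often narrow this down, with the hostname as a placeholder: the first looks for errors in the second forwarder's own splunkd log, the second for its output/connection activity toward the indexer.

index=_internal host="<second_forwarder_hostname>" source=*splunkd.log* (ERROR OR WARN)

index=_internal host="<second_forwarder_hostname>" sourcetype=splunkd component=TcpOutputProc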
Hello, my issue is that my dashboard continues to load the old .js. In the network calls I see it is called with a "version", like: /static/@ab..../handler.js. When I browse the URL with the version number removed, I see my new script has been correctly installed, so I tried to call _bump, but even when the URL is found, it loads an empty page (without the bump button like in the Enterprise version), and my script continues to be loaded incorrectly. Does anyone know if it's possible to bump on Cloud? I tried changing the build number in app.conf, but no use. Thank you.
I have created a query similar to the one below:

host=nftHost index=paymeNow source="\\\\epamjhost\Logs\*"
| rex "(Message content+\s+:+\s+|\[Handling message+\s+:+\s+|\[Handling command of type CheckCommand:+\s+)(?<json>\{.*)"
| spath input=json
| table _time, MessageTypeDesc, CurrentState, CaseId, TaskType, Attributes{}.AttributeName, Attributes{}.JsonValue, _raw

The JSON below is obtained from the rex expression, and spath is used to parse it:

{
  "TaskId" : "1",
  "CurrentState" : "COMPLETED",
  "RequestedAction" : null,
  "User" : "NFTPAYME",
  "Attributes" : [{
      "AttributeName" : "transactionId",
      "AttributeType" : "int",
      "JsonValue" : "4"
    }, {
      "AttributeName" : "Enabled",
      "AttributeType" : "boolean",
      "JsonValue" : "false"
    }, {
      "AttributeName" : "holdType",
      "AttributeType" : "string",
      "JsonValue" : ""
    }, {
      "AttributeName" : "isSettlement",
      "AttributeType" : "boolean",
      "JsonValue" : "false"
    }, {
      "AttributeName" : "isIntraday",
      "AttributeType" : "boolean",
      "JsonValue" : "false"
    }, {
      "AttributeName" : "isReleaseReady",
      "AttributeType" : "boolean",
      "JsonValue" : "false"
    }, {
      "AttributeName" : "isStat",
      "AttributeType" : "boolean",
      "JsonValue" : "false"
    }, {
      "AttributeName" : "StatusList",
      "AttributeType" : "string",
      "JsonValue" : ""
    },
  ],
  "TaskType" : "Settle",
  "CaseId" : "1",
}

Attributes contains an array of objects, so my question is: how do I take the attributes and create a single string from the whole array? The desired output is a table with the columns _time, MessageTypeDesc, CurrentState, CaseId, TaskType, Attributes, _raw, where Attributes is a single string such as:

transactionId:4 Enabled:true holdType: isSettlement:false
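A minimal sketch of one way to collapse the array, assuming the spath extraction above already yields the two multivalue fields: mvzip pairs the attribute names with their values and mvjoin flattens the pairs into one string (the ":" and " " delimiters are arbitrary choices).

| spath input=json
| eval Attributes=mvjoin(mvzip('Attributes{}.AttributeName', 'Attributes{}.JsonValue', ":"), " ")
| table _time, MessageTypeDesc, CurrentState, CaseId, TaskType, Attributes, _raw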
I have Power User access only. I have a Splunk query, and I enabled an alert on it as a notable event. I also receive the notable events in ES under Incident Review, but I am not getting the search query's results in my notable events; I only get the alert name. I want all of the query's search results to appear in the notable events. Please help.

Received notable event with no information

Actual query's search result
Server running Ubuntu 20.04. Splunk Enterprise 8.2.4. Splunk Add-on for Java Management Extensions: 5.2.2.

I have configured it to use a custom script. Example output of the script:

splunk_user@server$ mocjmxpids.sh
3267,PROD_process1
2341258,PROD_process2

What happens when searching the index is that the field jvmDescription is correctly filled in with the PID and process name from process2 onwards. Process1 is not found and gets the server name as its jvmDescription. The data from JMX is read, however, so the add-on is attaching to the JVM, just not setting the correct name in jvmDescription. I honestly don't see what is going wrong. The same shell is being used for the user, and there are no odd whitespace characters that I can see.

I have had other issues with the app, though, in the sense that the GUI creates a jmx_servers.conf that is invalid. When selecting customscript, it still creates a pidcommand setting in the file and violates the XML, so I had to manually fix the config there already, and this on multiple servers, so I'm wondering whether anything else is bugged in this version.
timechart [stats count|eval app=$A$|eval search=case(app=="*","span=30m count by B",app!="*","span=30m count by C")] does not work after upgrading Splunk from 8.0.6 to 8.2.5.
Hello, everyone! During a search I got a table like this:

time       host    user      action    result
12:24:06   host1   Alex      action1   success
12:48:32   host2   Michael   action2   fail

I have a lookup users.csv, which looks like this:

host    user
host1   Alex
host2   George

I want to compare my table with the lookup and, if host and user match, return the rows of my table (time, host, user, action, result). So in this example I want the results table to be:

time       host    user   action    result
12:24:06   host1   Alex   action1   success

(because in the second line the user does not match). Thank you in advance.
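A minimal sketch of one way to do this with the lookup as described (field names taken from the tables above): pull the expected user for each host out of users.csv and keep only the rows where it matches the event's user.

... your search ...
| lookup users.csv host OUTPUT user AS expected_user
| where user=expected_user
| table time, host, user, action, result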
I have a few Threat Intelligence feeds that have use cases applied to them, but I'm trying to filter out blocked events; for example, say an asset was attempting to communicate with a malicious site and it was blocked by the proxy or firewall. Do I tune the use-case search itself or modify the Threat Intelligence datamodel? All suggestions are appreciated.
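The usual pattern is to tune the use-case (correlation) search rather than the datamodel itself, excluding events that the proxy or firewall already blocked via the CIM action field. The sketch below uses the Web datamodel purely as an illustration; the datamodel, dataset, and field names depend on how your data is mapped and are assumptions here.

| tstats summariesonly=true count from datamodel=Web where Web.action!="blocked" by Web.src Web.url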
Hello, does Splunk use in-memory technology? Big data systems use in-memory technology across their platforms (data collection/transmission, storage, retrieval, etc.), and I wonder whether it is applied in Splunk as well. If it is, is it Splunk's own development, or is open source used? If you have any material to refer to, please share it with us.
Hi all, may I know the difference between the average response time shown next to the tier icon in the flow map and the average response time in the popup view when I click on the tier icon? Thanks.
Hello! I'm trying to push alerts into Swimlane using the Swimlane add-on. I've given full global permissions to the saved alert. There are 101 events to push, but they aren't getting pushed into Swimlane. Please find the logs below:

04-13-2022 10:50:57.393 +0200 ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert push_alerts_to_swimlane results_file="/opt/splunk/var/run/splunk/dispatch/scheduler_c3Jpa2FhbnRoLmFtcnV0aGEub3B0aXY_emZfY29ycmVsYXRpb25zX2ZpcmVleWU__RMD58b260abcef59878b_at_1649839800_2808/per_result_alert/tmp_16.csv.gz" results_link="https://mycompanyabcd.com/app/xxx_correlations_fireeye/search?q=%7Cloadjob%20scheduler_c3Jpa2FhbnRoLmFtcnV0aGEub3B0aXY_emZfY29ycmVsYXRpb25zX2ZpcmVleWU__RMD58b260abcef59878b_at_1649839800_2808%20%7C%20head%2017%20%7C%20tail%201&earliest=0&latest=now"'
04-13-2022 10:50:57.393 +0200 WARN sendmodalert - action=push_alerts_to_swimlane - Alert action script returned error code=1

Any advice appreciated. Thanks!
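Error code 1 only says that the alert action script exited with a failure, so the underlying reason is usually in the script's own logging in _internal. Two sketch searches, with the action name taken from the log above and the modalert log file name being an assumption about how such add-ons typically log:

index=_internal sourcetype=splunkd sendmodalert action="push_alerts_to_swimlane"

index=_internal source=*push_alerts_to_swimlane_modalert.log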