All Topics

We have an issue where we created a single default frozen folder instead of a frozen folder for each index. Now we have some data in that frozen folder and we want to restore it back to searchable data. How can I identify the index name of that data, or, if I can't identify the index name, how can I restore it to an arbitrary index?
Hello everyone, I have the following Splunk query, which I am trying to build for dropdowns in a dashboard. Basically there are 2 dropdowns; the 1st dropdown has static values which are index names: index_1, index_2, index_3. Based on the selected index, I am trying to run this Splunk query:

index="index_1"
| eval hostname_pattern=case( index == "index_1","*-hostname_1", index == "index_2","*-hostname_2" )
| search hostname=hostname_pattern

The search always returns empty. However, if I run the direct query for index_1 or index_2 with its relevant hostname, it works and returns results:

index="index_1" | search hostname="*-hostname_1"

For the sake of checking whether my condition is working or not, I fed the output of the eval case into a table and checked by passing the relevant indexes (index_1 or index_2):

index="index_1"
| eval hostname_pattern=case( index == "index_1","*-hostname_1", index == "index_2","*-hostname_2" )
| stats count by hostname_pattern
| table hostname_pattern
| sort hostname_pattern

This returns *-hostname_1. I'm not sure how to pass the hostname value, based on the selected index, into the search. I'd highly appreciate your help.
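A hedged sketch of one way around this (the dropdown token name $index_tok$ is hypothetical, not from the original post): the search command treats an unquoted right-hand side as a literal value, so hostname=hostname_pattern compares against the text "hostname_pattern" rather than the field's value. Comparing field against field with where/like, after converting the glob * to the SQL-style %, may behave as intended:

index="$index_tok$"
| eval hostname_pattern=case( index == "index_1","*-hostname_1", index == "index_2","*-hostname_2" )
| where like(hostname, replace(hostname_pattern, "\*", "%"))

Alternatively, the mapping could be done entirely in dashboard tokens (a change-event eval on the index dropdown that sets a hostname token), so the inner search stays a plain hostname=$hostname_tok$ filter.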
Hi guys, I have an issue with a newly set up HF and UF. The Windows UFs' logs are reaching the indexers while the Linux UF's are not. Communication is OK between the Linux UF and the HF as observed using tcpdump: the Linux UF is sending traffic and the HF receives and processes it. Can you help with what needs to be checked on the UF or HF?
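If it helps, a quick hedged check (the host value is a placeholder for the Linux UF's actual hostname, not something from the post): if that UF's own _internal events reach the indexers, the forwarding path is at least partly working and any output errors should be visible there; if nothing comes back at all, the break is more likely in the UF's outputs.conf or the HF's onward forwarding rather than the specific log inputs.

index=_internal host=<linux_uf_host> sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)

It is also worth confirming on the HF that the received data is actually parsed and forwarded on, since tcpdump only shows that the TCP session exists.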
Hello, I am following this tutorial to create a Splunk app using React on macOS Sonoma: https://splunkui.splunk.com/Toolkits/SUIT/AppTutorial However, I am not able to get it to work. The 'start' view is simply not added to the app views in Splunk, even though the files are there in my app. I wasn't even able to launch the app before I set it to 'Visible' by going to 'Manage Apps' and editing its properties; it should have been visible because it is set as such in my app.conf. But after I launched it, I was redirected to the search page (image below). If I go to the URL http://localhost:8000/en-US/app/my-splunk-app/start, I get the 'Page not found' error page. Could someone please help me with this?
My query returns these events. I need to compute the total time A was in this state and the total time B was in this state. My thought is to subtract the Timestamp of the first A from the most recent A, and do the same for B, but I can't figure out the right way to do this.

Timestamp          Job  Date       LoggedTime  Ready
1728092168.000000  A    10/4/2024  21:36:03    1
1728092163.000000  A    10/4/2024  21:35:50    1
1728092150.000000  A    10/4/2024  21:35:27    1
1728092127.000000  A    10/4/2024  21:35:16    1
1728090335.000000  B    10/4/2024  21:05:15    2
1728090315.000000  B    10/4/2024  21:05:03    2
1728090303.000000  B    10/4/2024  21:04:53    2
1728090293.000000  B    10/4/2024  21:04:31    2
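For what it's worth, a minimal sketch of that subtraction (assuming Timestamp is epoch seconds and <base search> stands in for the existing query): range() returns max minus min per group, which is exactly last-minus-first per Job.

<base search>
| stats min(Timestamp) AS first_seen, max(Timestamp) AS last_seen, range(Timestamp) AS duration_seconds by Job
| eval duration=tostring(duration_seconds, "duration")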
Currently, InfraViz doesn't let you deploy custom extensions. If you wish to deploy custom extensions on Kubernetes using Machine Agents, then this article is for you. This can be done in 2 ways:

1. Creating a new Machine Agent image
2. Creating a new yaml file for the Machine Agent

Creating a new Machine Agent Image

If you wish to use this method, which is modifying the Machine Agent image, you need to take a step back and ask yourself:

Do you need the extension on all nodes? If not, and you deploy InfraViz with the defaults after just updating the image with the extension, then on the node where it works everything will be fine, but on the others you will have logs filled with ERROR/WARN messages, which can potentially lead to the Machine Agent collector script timing out.

Do you need a Machine Agent on all nodes? If not, then we are okay. We can use the nodeSelector property of InfraViz and simply deploy this new image using InfraViz on the specific node.

In any case, the Dockerfile will look like below:

FROM ubuntu:latest

# Install curl and unzip
RUN apt-get update && apt-get install -y curl unzip procps

# Add and unzip the Machine Agent bundle
ADD machineagent-bundle-64bit-linux-23.7.0.3689.zip /tmp/machineagent.zip
RUN unzip /tmp/machineagent.zip -d /opt/appdynamics && rm /tmp/machineagent.zip

# Set environment variable for Machine Agent home
ENV MACHINE_AGENT_HOME /opt/appdynamics

# Add the extension folder and start-appdynamics script
ADD create-open-file-extension-folder /opt/appdynamics/monitors
ADD start-appdynamics ${MACHINE_AGENT_HOME}

# Make start-appdynamics script executable
RUN chmod 744 ${MACHINE_AGENT_HOME}/start-appdynamics

# Set Java Home environment variable
ENV JAVA_HOME /opt/appdynamics/jre/bin/java

# Run AppDynamics Machine Agent
CMD ["/opt/appdynamics/start-appdynamics"]

NOTE: In the same directory as the Dockerfile:

- You need to have the AppDynamics zip locally. In my case, I have machineagent-bundle-64bit-linux-23.7.0.3689.zip in my local directory.
- create-open-file-extension-folder is the extension folder which I am moving to /opt/appdynamics/monitors. It has my script.sh and monitor.xml files. Remember, for extensions the Machine Agent looks for folders and files in the monitors directory.
- start-appdynamics is the startup script. This is its content; you will need to edit it and add your Controller configuration:

MA_PROPERTIES="-Dappdynamics.controller.hostName=xxx.saas.appdynamics.com"
MA_PROPERTIES+=" -Dappdynamics.controller.port=443"
MA_PROPERTIES+=" -Dappdynamics.agent.accountName=xxxx"
MA_PROPERTIES+=" -Dappdynamics.agent.accountAccessKey=xx"
MA_PROPERTIES+=" -Dappdynamics.controller.ssl.enabled=true"
MA_PROPERTIES+=" -Dappdynamics.sim.enabled=true"
MA_PROPERTIES+=" -Dappdynamics.docker.enabled=false"
MA_PROPERTIES+=" -Dappdynamics.docker.container.containerIdAsHostId.enabled=true"

# Start Machine Agent
${MACHINE_AGENT_HOME}/jre/bin/java ${MA_PROPERTIES} -jar ${MACHINE_AGENT_HOME}/machineagent.jar

Great. Now all you need to do is build the image and push it to your repository. Once done, update the Image field in the InfraViz spec with this new image.

Creating a new yaml file for Machine Agent

The second option is using a Deployment and InfraViz together. I have created an infraviz-deployment.yaml file. This is a Deployment that I am deploying on a specific node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-agent-extension
  labels:
    app: machine-agent-extension
spec:
  replicas: 1
  selector:
    matchLabels:
      app: machine-agent-extension
  template:
    metadata:
      labels:
        app: machine-agent-extension
    spec:
      initContainers:
        - name: create-open-file-extension-folder
          image: busybox
          command: ['sh', '-c', 'mkdir -p /opt/appdynamics/monitors/open-file-extension && cp /tmp/config/* /opt/appdynamics/monitors/open-file-extension && chmod +x /opt/appdynamics/monitors/open-file-extension/script.sh']
          volumeMounts:
            - name: config-volume
              mountPath: /tmp/config # Mount ConfigMap here temporarily
            - name: open-file-extension
              mountPath: /opt/appdynamics/monitors/open-file-extension # Target directory in emptyDir
      containers:
        - name: machine-agent-extension
          image: appdynamics/machine-agent:latest
          ports:
            - containerPort: 9090
          env:
            - name: APPDYNAMICS_CONTROLLER_HOST_NAME
              value: "xxxx.saas.appdynamics.com"
            - name: APPDYNAMICS_CONTROLLER_PORT
              value: "443"
            - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
              value: "xxx"
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              value: "xxx"
            - name: APPDYNAMICS_SIM_ENABLED
              value: "true"
            - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
              value: "true"
          volumeMounts:
            - name: open-file-extension
              mountPath: /opt/appdynamics/monitors/open-file-extension
      volumes:
        - name: config-volume
          configMap:
            name: open-file-extension-config # ConfigMap holding script.sh and monitor.xml
        - name: open-file-extension
          emptyDir: {} # EmptyDir to allow read/write
      nodeSelector:
        kubernetes.io/hostname: "ip-222-222-222-222.us-west-2.compute.internal"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: open-file-extension-config
  namespace: default
data:
  script.sh: |
    #!/bin/bash
    # Get the current open files limit for the process
    open_files_limit=$(ulimit -n)
    ##Commentlineforcheck
    # Output the open files limit to stdout
    echo "name=Custom Metrics|OpenFilesLimitMonitor|OpenFilesLimit,value=$open_files_limit"
  monitor.xml: |
    <monitor>
      <name>OpenFile</name>
      <type>managed</type>
      <enabled>true</enabled>
      <enable-override os-type="linux">true</enable-override>
      <description>OpenFile</description>
      <monitor-configuration></monitor-configuration>
      <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments></task-arguments>
        <executable-task>
          <type>file</type>
          <file>script.sh</file>
        </executable-task>
      </monitor-run-task>
    </monitor>

Right now I am only monitoring one node though, so we will use InfraViz to monitor the other nodes. Taint the node that I don't want InfraViz to run on, which is the one above:

kubectl taint node ip-222-222-222-222.us-west-2.compute.internal machine-agent=false:NoSchedule

Now I can deploy InfraViz.yaml normally and it won't be deployed on the ip-222 node. You now have a Machine Agent running on all of your nodes, one with the extension and the rest normally. Please reach out to Support if you have any questions.
Trying to monitor a separate print server folder, outside where Splunk is hosted, that contains print logs and has a UNC path. The folder only has .log files in it. I have the following index created: index = printlogs. When I try to add the folder path in Splunk through the Add Data feature ("Add Data" > "Monitor" > "Files & Directories"), I get to Submit and then get an error: "Parameter name: Path must be absolute". So I added the following stanza to my inputs.conf file in the system/local folder:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\*.log]
index = printlogs
host = cpn-prt01
disabled = 0
renderXml = 1

I created a second stanza with index = printlogs2 and a respective index, to monitor the following path and see if I can pull straight from the path and ignore the file type inside:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\]

I do see the full path to both in the "Files & Directories" list under Data Inputs. However, I am not getting any event counts when I look at the respective indexes on the Splunk Indexes page. I did a Splunk refresh and even restarted the Splunk server with no luck. Thought maybe someone has run into a similar issue or has a possible solution. Thanks in advance.
Hello, I am running two separate queries to extract values.

First query:

index=abc status=error | stats count AS FailCount

Second query:

index=abc status=planning | stats count AS TotalPlanned

Both queries are working well and giving the expected results. When I combine them using a subsearch, I am getting an error:

index=abc status=error
| stats count AS FailCount
    [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100

Error message:

Error in 'stats' command: The argument '(( TotalPlanned=761 )) is invalid'

Note: The count 761 is a valid count for TotalPlanned, so it did perform that calculation.
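The error happens because a subsearch expands into a search-filter string (here "(( TotalPlanned=761 ))"), which then gets handed to stats as an argument it can't accept. A hedged sketch of two alternatives (assuming both counts really do come from the same index, as in the two working queries): compute both counts in one pass with eval-based counting, or bolt the second result on with appendcols.

index=abc (status=error OR status=planning)
| stats count(eval(status="error")) AS FailCount, count(eval(status="planning")) AS TotalPlanned
| eval percentageFailed=round((FailCount/TotalPlanned)*100, 2)

The appendcols variant keeps the two original searches intact:

index=abc status=error
| stats count AS FailCount
| appendcols [ search index=abc status=planning | stats count AS TotalPlanned ]
| eval percentageFailed=round((FailCount/TotalPlanned)*100, 2)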
A user is receiving duplicated field names in Splunk results. For example, when I run a search I get an output of field1=Value1, but when the user runs the same search he gets an output of field1="field1=value1". Does anyone know what I need to do to help the user get the same result as mine?
Issue after upgrading an HF from Splunk 9.2.1 to 9.2.2. The OS is Red Hat 8.10 with the latest kernel version. I tried to change/give permissions to the splunk folder. I tried to set SELinux (sestatus) to permissive mode.

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk start --accept-license --answer-yes
Error calling execve(): Permission denied
Error launching systemctl show command: Permission denied
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
Can't run "btool server list clustering --no-log": Permission denied

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk btool server list clustering --no-log
execve: Permission denied while running command /mnt/splunk/splunk/bin/btool
I am trying to track a set of service desk ticket statuses across time. The data input is a series of ticket updates that come in as changes occur. Here is a snapshot:

What I'd like to do with this is get a timechart with the status at each time point; however, I have an issue with the "blank" time buckets being filled in with zeros, whereas I need the last valid value instead. My naive query is:

index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name

Which gives me:

How can I query to get these zeros filled in with the last valid count of ticket statuses? Some things I've tried with no success: some filldown kludges, usenull=f on the timechart, and a million other suggestions on this forum that usually involve a simpler query. Any suggestions? Thanks!
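In case it helps, a hedged sketch of the "carry the last valid value forward" idea (this assumes a zero in a bucket really means "no status update arrived in that span", not "zero tickets in that status"): turn the zero-filled buckets back into nulls, then filldown.

index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count by fields.status.name
| foreach * [ eval <<FIELD>> = if('<<FIELD>>'==0, null(), '<<FIELD>>') ]
| filldown

Whether carrying counts forward is the right interpretation between updates is a judgement call, so treat this as a starting point rather than the answer.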
Hello everyone! Today I noticed strange messages in the daily warnings and errors report:

10-04-2024 16:55:01.935 +0300 WARN UserManagerPro [5280 indexerPipe_0] - Unable to get roles for user= because: Could not get info for non-existent user=""
10-04-2024 16:55:01.935 +0300 ERROR UserManagerPro [5280 indexerPipe_0] - user="" had no roles

I checked that this pair first appeared 5 days ago, but that fact doesn't help me because I don't remember what I changed on that exact day. I also tried to find some helpful "nearby" events that could help me understand the root cause, but didn't observe anything interesting. What ways do I have to investigate this case? Maybe I can raise the log policy to DEBUG level? If I can, what should I change and where? A little more information: I have a search head cluster with LDAP authorization, and an indexer cluster with only local users.
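As a first step (a sketch, not a diagnosis), it can help to pin down which instances emit this pair and exactly when it started; since the thread name is indexerPipe_0, checking whether only the indexers produce it already narrows things down:

index=_internal sourcetype=splunkd component=UserManagerPro (log_level=WARN OR log_level=ERROR)
| timechart span=1h count by host

If you do want more detail, individual log channels (the UserManagerPro category here) can be raised to DEBUG per instance under Settings > Server settings > Server logging, which is less noisy than enabling DEBUG globally.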
I am looking for an example of using Bearer authentication within Python using helper.send_http_request in the Splunk Add-on Builder. All the examples I have found so far have "headers=None".

Python helper functions: https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/PythonHelperFunctions
I have a query that will calculate the volume of data ingested for a sourcetype:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st=<your sourcetype here>
| stats sum(b)
| eval GB = round('sum(b)'/1073741824,2)
| fields GB

The issue is that I have a list of 1200 sourcetypes. Please suggest how I can fit the entire list into this query.
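One hedged way to avoid pasting 1200 values into the search string (the lookup file name sourcetype_list.csv and its column name st are hypothetical; adjust them to whatever you load the list into): let a subsearch expand the list into an st filter and split the sum by sourcetype.

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
    [ | inputlookup sourcetype_list.csv | fields st ]
| stats sum(b) AS bytes by st
| eval GB = round(bytes/1073741824, 2)
| fields st, GB

The subsearch expands to ( st="..." ) OR ( st="..." ) ..., so the column name in the lookup has to match the field name used in the search (rename inside the subsearch if it doesn't).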
Hello community, I need to set up a dashboard that tracks the status of an alert from Splunk OnCall. An alert can have 2 to 3 statuses, and I would like to retrieve the _time of each step and keep it for each state (in particular to calculate durations). I manage to retrieve the _time for each state in a dedicated field, but I cannot transfer this value to the other states:

index=oncall_prod originOnCall="Prod" incidentNumber=497764
| sort _time desc
| rex field=entityDisplayName "(?<Priorité>..) - (?<Titre>.*)"
| eval startAlert = if(alertType == "CRITICAL", _time, "")
| eval startAlert = strftime(startAlert,"%Y-%m-%d %H:%M:%S ")
| eval ackAlert = if(alertType == "ACKNOWLEDGEMENT", _time, "")
| eval ackAlert = strftime(ackAlert,"%Y-%m-%d %H:%M:%S ")
| eval endAlert = if(alertType == "RECOVERY", _time, "")
| eval endAlert = strftime(endAlert,"%Y-%m-%d %H:%M:%S ")
| table _time, incidentNumber, alertType, Priorité, Titre, startAlert, ackAlert, endAlert, ticket_EV

Do you have any idea how to do this? I searched the forum but couldn't find a solution that matched my problem. Sincerely, Rajaion
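A minimal sketch of one way to pull each state's timestamp onto a single row per incident (assuming the alertType values are exactly CRITICAL, ACKNOWLEDGEMENT and RECOVERY, as in the query above, and that ticket_EV appears on at least one event of the incident):

index=oncall_prod originOnCall="Prod"
| rex field=entityDisplayName "(?<Priorité>..) - (?<Titre>.*)"
| stats min(eval(if(alertType=="CRITICAL", _time, null()))) AS startAlert,
        min(eval(if(alertType=="ACKNOWLEDGEMENT", _time, null()))) AS ackAlert,
        max(eval(if(alertType=="RECOVERY", _time, null()))) AS endAlert,
        latest(Priorité) AS Priorité, latest(Titre) AS Titre, latest(ticket_EV) AS ticket_EV
        by incidentNumber
| eval timeToAck=ackAlert-startAlert, timeToRecover=endAlert-startAlert
| fieldformat startAlert=strftime(startAlert, "%Y-%m-%d %H:%M:%S")
| fieldformat ackAlert=strftime(ackAlert, "%Y-%m-%d %H:%M:%S")
| fieldformat endAlert=strftime(endAlert, "%Y-%m-%d %H:%M:%S")

Keeping the raw epoch values and only formatting them with fieldformat is deliberate: the duration evals stay numeric while the display stays readable.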
Hi. We are starting to use Splunk Infrastructure Monitoring and want to deploy the OTel Collector using our existing Splunk infrastructure (deployment server). We would really like to send the OTel data to IM through an HTTP_PROXY, but we do not want to change the dataflow for the entire server, so it should be a local HTTP_PROXY for the otel-collector only. As I read the documentation, you need to set environment variables for the entire server and not just the otel-collector process. Does anyone have any experience using HTTP_PROXY with the OTel Collector? Kind regards, las
I've seen someone use this traffic search function but can't find it myself: How can I access this traffic search function? I know that I can run a search to get the same result, but I would like to be able to use this as well.
I have a lookup table that we update on a daily basis, with two fields that are relevant here, NAME and ID.

NAME     ID
Toronto  765
Toronto  1157
Toronto  36

I need to pull data from an index and filter for these three IDs. Normally I would just do:

<base search>
| lookup lookup_table ID OUTPUT NAME
| where NAME = "Toronto"

This works, but the search takes forever since the base search is pulling records from everywhere and filtering afterward. I'm wondering if it's possible to do something like this (pseudo code search incoming):

index=<index> ID IN ( |[inputlookup lookup_table where NAME = "Toronto"])

Basically, I'm trying to save time by not pulling all the records at the beginning, and instead filtering on a dynamic value that I have to grab from a lookup table.
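The pseudo code is very close to something that actually works; a hedged sketch (assuming the events carry a field literally named ID, matching the lookup column):

index=<index>
    [ | inputlookup lookup_table where NAME="Toronto" | fields ID ]
| lookup lookup_table ID OUTPUT NAME

The subsearch expands into ( ID="765" ) OR ( ID="1157" ) OR ( ID="36" ) before the main search runs, so the filtering happens up front. If the event field has a different name, add a rename inside the subsearch (e.g. | rename ID AS your_event_field) so the generated filter targets the right field.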
I am testing out the Splunk Operator Helm chart to deploy a C3 architecture Splunk instance. At the moment everything deploys without errors: my cluster manager will pull and install apps via the AppFramework config, and SmartStore is receiving data from the indexer cluster. However, after creating ingress objects for each Splunk instance in the deployment (LM, CM, MC, SHC, IDXC), I have been able to successfully log into every web GUI except for the indexer cluster. The behavior I am experiencing is basically like getting kicked out of the GUI the second I type the username and password and hit enter: the web page refreshes and I am back at the login screen. I double-checked that the Kubernetes secret containing the admin password is the same for all of the Splunk instances, and also intentionally typed in a bad password and got a login-failed message instead of the screen refresh I get when entering the correct password. I am not really sure how to go about troubleshooting this. I searched through the _internal index but didn't come up with a smoking gun.
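A small troubleshooting sketch (hedged; the host pattern is a guess at the Operator's pod naming, so adjust it to match your pods): the audit trail shows whether the indexers actually record a successful login before the session disappears, which separates an authentication problem from a session/cookie problem behind the ingress.

index=_audit action="login attempt" host=splunk-*-indexer-*
| table _time, host, user, info, reason

If those events show info=succeeded, a usual suspect with several pods behind one ingress is missing session affinity, since the Splunk Web session cookie has to come back to the pod that issued it.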
We are having an issue where, in order to see correct JSON syntax highlighting, we have to set "max lines" to "all lines". On a separate post the resolution was to turn off "pretty printing", so that instead of each event taking up multiple lines it only takes up one, which then allows Splunk to show the data with the correct JSON syntax highlighting. How do I turn this off?