All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We had an EC2 instance become inaccessible via AWS Session Manager. The root cause was the main volume filling up with various splunkforwarder-x.x.x RPM files in /usr/bin/. Yesterday the filesystem was cleaned up, but today there is another copy of that RPM in the /usr/bin/ directory. Does anyone know why this is happening?
So I couldn't find anything in the Splunk community that answers my question about pushing an update to a lookup table. I manually updated the .csv file on the back end of the search head server: I deleted a line and replaced it with another hostname. When I run this search:

|inputlookup dns_hosts.csv| stats count by host|eval count=0|join host type=outer [ search index="dns"|stats count by host]|fillnull|where count=0|fields host count

I'm still getting the host with a count of 0, i.e. the host that I removed from the csv file. My question is: do I need to restart the search head to push that change? I didn't change any config files, just the lookup file under the specific app directory's lookups folder. I wasn't sure if Splunk would automatically read the updated file after a certain amount of time, or if I needed to restart the server for it to take effect. And will that file replicate across all search heads after I restart? Thank you for any guidance.
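One way to narrow this down (a minimal sketch, reusing the lookup name from the search above; the host value is a placeholder): lookup files are read at search time, so no restart should be needed, but a CSV edited directly on disk on one search head cluster member is not replicated to the other members. First confirm what the search tier actually sees, and if the stale host is still there, reapply the change with outputlookup so it is replicated:

```check what the search-time copy of the lookup contains```
| inputlookup dns_hosts.csv
| search host="<host-you-removed>"

```remove the stale row in a replication-aware way (placeholder host value)```
| inputlookup dns_hosts.csv
| search NOT host="<host-you-removed>"
| outputlookup dns_hosts.csv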
Hi all, I have one question. I upgraded my Splunk deployment from 8.1.6 to 9.0.4. The deployment is: 3-node SH cluster, 3-node IDX cluster, 2 x HF, MC, SHC-D, CM, LM, DS. After the upgrade I noticed one thing about the queues in the Monitoring Console. Before the upgrade, all queues on all IDXs had 0% fill, but after the upgrade there is a small fill (about 5% on average, up to 10%) on the typing and indexing queues. From my point of view this is strange, because nothing changed during the upgrade: the hardware is the same, the amount of ingested data is the same, the kind of data is the same, there are no new log sources, etc. I searched through the documentation but did not find anything relevant. So I would like to ask: what is happening? Can it be safely ignored, or is there really something wrong inside Splunk? Are some config changes required because of internal changes in Splunk? Could you share your experience with this, if you have any? Thank you in advance for any hint or clue. Best regards, Lukas Mecir
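A hedged way to quantify what the Monitoring Console is showing, straight from the indexers' metrics.log (the host filter is a placeholder; adjust the span and queue names as needed):

index=_internal host=idx* source=*metrics.log group=queue (name=typingqueue OR name=indexqueue)
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 2)
| timechart span=5m avg(fill_pct) by name

A few percent of average fill is usually harmless as long as blocked=true events stay absent; comparing this trend across the upgrade window should show whether anything is actually degrading.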
Hello Splunkers, I would like to set an alert if a sudden high number of events is received. I have this base search:

index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker | eval events=eps*kb/kbps | timechart fixedrange=t span=1m limit=5 sum(events) by series

So I have the number of events per source per minute. I would like to trigger an alert if there are more than X events in 5 consecutive minutes from one source. Thanks for your hints in advance.
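A possible approach (a sketch only; the threshold 1000 is a placeholder for X, and the per-minute grouping mirrors the base search above): use streamstats with a 5-row window per source, so the alert condition becomes "5 consecutive minutes over the threshold":

index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker
| eval events=eps*kb/kbps
| bin _time span=1m
| stats sum(events) as events by _time series
| sort 0 series, _time
| eval over=if(events > 1000, 1, 0)
| streamstats window=5 global=f sum(over) as minutes_over by series
| where minutes_over = 5

Saved as an alert with "number of results > 0", this triggers only when a single source stays above the threshold for five consecutive minutes.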
Hey, I would like to configure a webhook to send Cisco Meraki alarms to Splunk On-Call. There isn't a dedicated third-party integration for this, and the generic "REST" integration isn't working with it. Is there any way to add Meraki to the third-party integrations, or any other way to make it work? Thanks in advance.
Hi, I have a query which gives a table of results. Instead of exporting the table, I need to export the raw events themselves. How can I do that? Instead of exporting the 9980 table values, I need to export all 16882 events. Any help would be appreciated!
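A minimal sketch, assuming the transforming commands (stats/table/etc.) at the end of the existing query can simply be dropped: run the same base search (placeholders below) and keep only the raw events, then export from the Events tab or with | outputcsv:

index=your_index your_filters_here
| table _time _raw

Everything before the first transforming command determines the 16882 events, so exporting at that point returns the events rather than the aggregated table.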
I have created a bar chart whose y-axis is the count by status, with the values "new" and "closed", but it displays the "closed" bar block first and then the "new" bar block. I want it to show "new" first and then "closed". How can I do that?
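One way to control the order (a sketch; the status field name and its values are assumed from the description above) is to sort the rows feeding the chart before they are drawn:

... | stats count by status
| eval sort_order=case(status="new", 1, status="closed", 2)
| sort sort_order
| fields - sort_order

The chart renders the bars in the row order it receives, so "new" comes before "closed".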
Hello Splunkers!! I have the query below, and from it I want the result shown below (as laid out in Excel). Please help me achieve that result.

index=ABC sourcetype=ABC
| eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M")
| stats count count(eval(ShuttleId)) as total by sourcetype
| table sourcetype total
| join max=0 type=outer sourcetype
    [| search index=ABC sourcetype=ABC
     | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M")
     | stats count by ShuttleId sourcetype _time]
| table ShuttleId count total
| eval condition =if(round((count/total),2) <=0, "GREEN", "RED")
| eval Status =round((count/total),2)
| eval Shuttle_percentage = round(((count/total)*100),2)
| table ShuttleId Shuttle_percentage

Desired result:

_time                          ShuttleId    Total_Orders  Errors
2022-08-03T00:00:00.000+0000   Shuttle_001  69341         117
2022-08-04T00:00:00.000+0000   Shuttle_002  85640         51
2022-08-05T00:00:00.000+0000   Shuttle_003  72260         43
2022-08-06T00:00:00.000+0000   Shuttle_004  60291         22
2022-08-07T00:00:00.000+0000   Shuttle_005  0             0
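A hedged sketch of how the per-day, per-shuttle numbers could be produced without the join; which events count as "orders" versus "errors" is not visible in the sample, so the counting logic below is an assumption to adapt:

index=ABC sourcetype=ABC
| bin _time span=1d
| stats count as Errors by _time ShuttleId
| eventstats sum(Errors) as Total_Orders by _time
| eval Shuttle_percentage=round((Errors / Total_Orders) * 100, 2)
| eval condition=if(Shuttle_percentage <= 0, "GREEN", "RED")
| table _time ShuttleId Total_Orders Errors Shuttle_percentage condition

The bin plus stats by _time ShuttleId gives one row per shuttle per day, and eventstats adds the per-day total onto every row, which is what the join was trying to do.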
Hi all, Is it currently possible to create a conditional macro expansion? For example, I have different lists of hosts and want the expansion to depend on the macro argument: `myhosts(old)` would expand to host=hostname1 OR host=hostname2, and `myhosts(new)` would expand to host=hostname3 OR host=hostname4. I looked into different functions to implement this but could not find a solution. Thank you.
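Eval-based macros can do this; a sketch of the macros.conf stanza (the host names are copied from the example above, and where you place the macro is up to you):

# macros.conf
[myhosts(1)]
args = type
definition = case("$type$"=="old", "host=hostname1 OR host=hostname2", "$type$"=="new", "host=hostname3 OR host=hostname4")
iseval = 1

With iseval = 1 the definition is an eval expression whose string result is substituted into the search, so `myhosts(old)` and `myhosts(new)` expand to the two different host lists.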
Hello, good day! I have mail logs and I need to check whether a sender has appeared before in the last 30 days. I'm having trouble writing the SPL with a join or subsearch. The base search is index=* sourcetype=maillogs and the field I want to compare is sender. If the sender appeared in the last 30 days, then I have a match and I should see those events in a stats output or table. I tried a subsearch, but after all my attempts I ended up with nothing. Could you please help me?
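A subsearch-free sketch (the 30-day window comes from the question; the one-day "recent" window is an assumption to adjust): search the whole 30 days once, split each event into a recent or prior period, and keep senders that appear in both:

index=* sourcetype=maillogs earliest=-30d@d
| eval period=if(_time >= relative_time(now(), "-1d@d"), "recent", "prior")
| stats count(eval(period="recent")) as recent_count, count(eval(period="prior")) as prior_count, earliest(_time) as first_seen by sender
| where recent_count > 0 AND prior_count > 0
| convert ctime(first_seen)

This lists every sender from the last day that was also seen earlier in the 30-day window, together with when they were first seen.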
I need to create a customized table in Splunk. Can you please help with how to implement that part? I have also attached a sample image; it should look like that.
We have noticed that Azure NSG flow logs are not being ingested consistently via the Splunk Add-on for Microsoft Cloud Services; some logs have been missing in the past. For troubleshooting, we'd like to know: Q1. When changing the add-on's log level setting to "DEBUG", do we need to disable the inputs on the "Inputs" page first? Q2. Will this change apply to all logs we are currently ingesting, including internal logs?
I have a Spring Boot application deployed in AKS. I use the AppDynamics Docker image to inject the agent into the JVM via an init container. My company does not have a Cluster Agent license, and I'm unsure if or when that would be available, but the Machine Agent and Server Visibility licenses are there. So, to capture server stats, I'm trying to configure the Machine Agent using the sidecar container approach. Below is a snippet of my deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 2
  template:
    spec:
      initContainers:
        - name: appd-agent
          command:
            - cp
            - -ra
            - /opt/appdynamics/.
            - /opt/temp
          image: docker.io/appdynamics/java-agent:22.12.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 200m
              memory: 100M
            requests:
              cpu: 100m
              memory: 50M
          volumeMounts:
            - mountPath: /opt/temp
              name: appd-agent-repo
      containers:
        - name: appd-analytics-agent
          envFrom:
            - configMapRef:
                name: controller-info
          env:
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: my-secrets
                  key: APP_D_AGENT_SECRET
          image: docker.io/appdynamics/machine-agent-analytics:23.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - touch
                - /tmp/healthy
          livenessProbe:
            exec:
              command:
                - touch
                - /tmp/healthy
          resources:
            limits:
              cpu: 200m
              memory: 900M
            requests:
              cpu: 100m
              memory: 600M
        - name: my-non-prod-app
          image: xxx.xxx
          imagePullPolicy: Always
          resources:
            requests:
              memory: "2Gi"
              cpu: "1"
            limits:
              memory: "4Gi"
              cpu: "2"
          env:
            - name: JDK_JAVA_OPTIONS
              value: "-javaagent:/opt/temp/javaagent.jar -Djava.net.useSystemProxies=true"
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: my-secrets
                  key: APP_D_AGENT_SECRET
          envFrom:
            - configMapRef:
                name: my-configmap
          ports:
            - containerPort: 8080
          readinessProbe:
            ...
          livenessProbe:
            ...
          volumeMounts:
            - mountPath: /opt/temp
              name: appd-agent-repo
      volumes:
        - name: appd-agent-repo
          emptyDir: { }
---
apiVersion: v1
kind: ConfigMap
data:
  APPDYNAMICS_AGENT_ACCOUNT_NAME: "my-non-prod-account"
  APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME: "my-non-prod-account_#####"
  APPDYNAMICS_AGENT_APPLICATION_NAME: "MyApp Dev"
  APPDYNAMICS_MACHINE_HIERARCHY_PATH: "MyApp"
  APPDYNAMICS_CONTROLLER_HOST_NAME: "my-non-prod-account.saas.appdynamics.com"
  APPDYNAMICS_CONTROLLER_PORT: "443"
  APPDYNAMICS_CONTROLLER_SSL_ENABLED: "true"
  APPDYNAMICS_SIM_ENABLED: "true"
  EVENT_ENDPOINT: "https://analytics.api.appdynamics.com:443"
metadata:
  name: controller-info

I can see that the machine-agent container gets registered successfully, and I can see the machine under Servers, but there I only see the machine agent process and no other processes. Is it capturing the details of its own container only, or of the whole pod? And if I go to the Tiers & Nodes view of my application, there are no details under Servers, and the machine agent status is "Agent not installed" under the Agents tab for each node. So it looks like the machine agent is not able to detect the nodes of my application, or is not able to attach to them.
I want the X axis to follow the same order as the legend.
Hello, CPU usage is showing very high (more than 92%) in the Monitoring Console. How do we know which resources are causing the issue? If it is due to real-time searches or ad hoc search queries (or running scheduled searches/reports), how do we know which users (or searches) are responsible? We have a number of users remotely accessing the Splunk search heads and reports, running search queries and scheduling searches/reports based on their requirements. Any recommendation would be highly appreciated. Thank you!
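A hedged starting point, assuming audit logging is enabled (it is by default): the _audit index records every completed search with its runtime and owner, which usually points at the heaviest users and saved searches:

index=_audit action=search info=completed
| stats count as searches, sum(total_run_time) as total_run_time_s by user, savedsearch_name
| sort - total_run_time_s

An empty savedsearch_name means an ad hoc search; the Monitoring Console's Search Activity and Resource Usage dashboards give the same picture per process if _introspection data is available.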
Hi, this might be a stupid question, but I am new to Splunk. We are trying to upgrade Splunk ES on the search head, and I need to copy the .spl file to the search head so that I can upgrade the app. A bit about the environment: all our search heads are on Linux hosted in Azure, and access to these hosts is managed through Cloudflare. We access these Linux hosts using the Azure CLI from PowerShell. I have downloaded the .spl file on my local Windows machine. I tried SCP and WinSCP (GUI) but was unable to connect to the search heads and copy the file over to the Linux box from my Windows host. Does anyone know how to achieve this? Your help with this would be highly appreciated.
Individually these searches work:

```#1 sum all values in field repeat_count in all threat logs that are M,H,C severity```
index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| stats sum(repeat_count) as TotalCount

```#2 sum all repeat_count values for the top 10 signatures```
index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| stats sum(repeat_count) as Top_10_Threats_per_Day by signature
| sort 10 -Top_10_Threats_per_Day
| stats sum(Top_10_Threats_per_Day) as Top-10

I am trying to get the two values into a timechart:

| timechart span=1d values(TotalCount) as "Total", values(Top-10) as "Total of top 10"

I tried a subsearch (search 1 [search 2 | fields Top-10]) and I tried multisearch, with no luck.
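A single-search sketch that avoids the subsearch entirely (field names reused from the searches above; note the top 10 is recomputed per day, which may differ slightly from a top 10 over the whole period):

index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| bin _time span=1d
| stats sum(repeat_count) as per_signature by _time signature
| eventstats sum(per_signature) as daily_total by _time
| sort 0 _time, -per_signature
| streamstats count as rank by _time
| eval top10_part=if(rank <= 10, per_signature, 0)
| stats max(daily_total) as "Total", sum(top10_part) as "Total of top 10" by _time

Because _time is already binned to one day, the final stats by _time produces the same shape as a daily timechart with both values side by side.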
On the topic of managing applications from Splunkbase, I have a few questions. Take the TA-Exchange-Mailbox as an example:

Firstly, the installation documentation for this application seems incorrect. While it references the application being installed at the collection tier/input phase, it doesn't mention anything about the search tier/phase, despite the existence of stanzas in props/transforms. Is the installation location implied? Here are the links to the documentation for reference: https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Configurationparametersandthedatapipeline https://docs.splunk.com/Documentation/AddOns/released/MSExchange/About#Splunk_Add-on_for_Microsoft_Exchange_Component_Installation_Locations

Secondly, how should I manage the application at the various tiers of my deployment? The TA-Exchange-Mailbox application has inputs, props, transforms, and so on. Should I delete inputs.conf before putting the application on my search head? Do I delete props or transforms before adding it to my universal forwarders? If it doesn't matter because Splunk will only use what is appropriate at the various phases, then I can manage a single app with all of the configs and deploy it everywhere, which seems to be the least overhead.

Thirdly, if I need to modularize the application and apply config files at their respective tiers, can I rename it? That way I would have two separate applications to better manage changes in source control: TA-Exchange-Mailbox_inputs and TA-Exchange-Mailbox_props (or parsing, or whatever).

I would appreciate any advice or best practices on managing applications from Splunkbase. Thank you!
Hi, here is my data, in two logs with three fields.

Log1: Books Bought AccountName={} , BookIds={} (here BookIds can contain multiple book IDs), e.g.:

Books Bought AccountName={ABC} , BookIds={book1, book2, book3}
Books Bought AccountName={ABC} , BookIds={book1}
Books Bought AccountName={DEF} , BookIds={book1, book2}
Books Bought AccountName={EPF} , BookIds={book1, book3}
Books Bought AccountName={EPF} , BookIds={book1}

Log2: Books Sold AccountName={} , BookId={} (here BookId contains only one book ID), e.g.:

Books Sold AccountName={ABC} , BookId={book2}
Books Sold AccountName={EPF} , BookId={book1}
Books Sold AccountName={EPF} , BookId={book1}

The result I want:

AccountName   Total Books   bookName   bought   sold
ABC           4             book1      2        0
                            book2      1        1
                            book3      1        0
DEF           2             book1      1        0
                            book2      1        0
EPF           3             book1      2        2
                            book3      1        0

Can anyone please help me? I have tried but am not getting this result.
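A sketch of one way to get there, assuming both logs live in the same index (placeholder below) and follow the exact text layout shown above:

index=your_index ("Books Bought" OR "Books Sold")
| rex "Books (?<action>Bought|Sold) AccountName=\{(?<AccountName>[^}]+)\}\s*,\s*BookIds?=\{(?<bookName>[^}]+)\}"
| makemv delim="," bookName
| mvexpand bookName
| eval bookName=trim(bookName)
| stats count(eval(action="Bought")) as bought, count(eval(action="Sold")) as sold by AccountName bookName
| eventstats sum(bought) as "Total Books" by AccountName
| table AccountName "Total Books" bookName bought sold

The rex captures both log layouts (BookIds and BookId), makemv/mvexpand splits the multi-book purchases into one row per book, and eventstats adds the per-account total that the first column of the desired table needs.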
I installed a Universal Forwarder and no inputs have been added yet, but there is still gradual memory growth. Why is there constant memory growth with the Universal Forwarder? More importantly, in a Kubernetes cluster setting every extra MB of memory usage matters. This applies to all Splunk instances except indexers/heavy forwarders.