All Topics

Hello, good day! I have mail logs and I need to check whether a sender has appeared before in the last 30 days. I am having trouble writing the SPL with join or a subsearch. The base search is index=* sourcetype=maillogs, and the field I want to compare is sender. If the sender appeared in the last 30 days of mail, that is a match and I should see those events in stats or a table. I tried a subsearch, but after all my attempts I ended up with nothing. Could you please help me?
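A minimal sketch of one way to do this without join, reusing the index/sourcetype placeholders from the question and assuming a one-day "current" window: the subsearch returns the senders seen in the preceding 30 days, and the outer search keeps only today's events from those senders (the usual subsearch result limits apply).

index=* sourcetype=maillogs earliest=-1d@d
    [ search index=* sourcetype=maillogs earliest=-30d@d latest=-1d@d
      ```subsearch: distinct senders seen in the prior 30 days```
      | stats count by sender
      | fields sender ]
| stats count as mails_today earliest(_time) as first_seen by sender

If the list of prior senders is large, a lookup refreshed by a scheduled search is usually a more scalable variant of the same idea.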
I need to create a customized table in Splunk. Can you please help with how to implement that part? I have also attached a sample image; the table should look like that.
We have noticed that Azure NSG flow logs are not being consistently ingested via the Splunk Add-on for Microsoft Cloud Services; some logs have been missing in the past. For troubleshooting, we'd like to know:
Q1. When changing the add-on's logging level to "DEBUG", do we need to disable the input on the "Inputs" page first?
Q2. Will this change apply to all logs we are currently ingesting, including internal logs?
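Not an answer to the DEBUG questions, but before raising the log level it can help to look at the add-on's existing internal logs for errors. A rough sketch, assuming the add-on's log files have "mscs" or "microsoft-cloudservices" in their source path on the instance running the input; adjust the source filter to whatever actually appears in your index=_internal data.

index=_internal (source=*mscs* OR source=*microsoft-cloudservices* OR source=*microsoft_cloudservices*) (log_level=ERROR OR log_level=WARNING)
| stats count by source log_level
| sort - count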
I have a Spring Boot application deployed in AKS. I use the AppDynamics Docker image to inject the Java agent into the JVM via an init container. My company does not have a cluster license, and I'm unsure if or when that would be available, but the Machine Agent and Server Visibility licenses are there. So, to capture server stats, I'm trying to configure the machine agent using the sidecar container approach. Below is a snippet of my deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 2
  template:
    spec:
      initContainers:
        - name: appd-agent
          command:
            - cp
            - -ra
            - /opt/appdynamics/.
            - /opt/temp
          image: docker.io/appdynamics/java-agent:22.12.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 200m
              memory: 100M
            requests:
              cpu: 100m
              memory: 50M
          volumeMounts:
            - mountPath: /opt/temp
              name: appd-agent-repo
      containers:
        - name: appd-analytics-agent
          envFrom:
            - configMapRef:
                name: controller-info
          env:
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: my-secrets
                  key: APP_D_AGENT_SECRET
          image: docker.io/appdynamics/machine-agent-analytics:23.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - touch
                - /tmp/healthy
          livenessProbe:
            exec:
              command:
                - touch
                - /tmp/healthy
          resources:
            limits:
              cpu: 200m
              memory: 900M
            requests:
              cpu: 100m
              memory: 600M
        - name: my-non-prod-app
          image: xxx.xxx
          imagePullPolicy: Always
          resources:
            requests:
              memory: "2Gi"
              cpu: "1"
            limits:
              memory: "4Gi"
              cpu: "2"
          env:
            - name: JDK_JAVA_OPTIONS
              value: "-javaagent:/opt/temp/javaagent.jar -Djava.net.useSystemProxies=true"
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: my-secrets
                  key: APP_D_AGENT_SECRET
          envFrom:
            - configMapRef:
                name: my-configmap
          ports:
            - containerPort: 8080
          readinessProbe:
            ...
          livenessProbe:
            ...
          volumeMounts:
            - mountPath: /opt/temp
              name: appd-agent-repo
      volumes:
        - name: appd-agent-repo
          emptyDir: { }
---
apiVersion: v1
kind: ConfigMap
data:
  APPDYNAMICS_AGENT_ACCOUNT_NAME: "my-non-prod-account"
  APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME: "my-non-prod-account_#####"
  APPDYNAMICS_AGENT_APPLICATION_NAME: "MyApp Dev"
  APPDYNAMICS_MACHINE_HIERARCHY_PATH: "MyApp"
  APPDYNAMICS_CONTROLLER_HOST_NAME: "my-non-prod-account.saas.appdynamics.com"
  APPDYNAMICS_CONTROLLER_PORT: "443"
  APPDYNAMICS_CONTROLLER_SSL_ENABLED: "true"
  APPDYNAMICS_SIM_ENABLED: "true"
  EVENT_ENDPOINT: "https://analytics.api.appdynamics.com:443"
metadata:
  name: controller-info

I can see that the machine-agent container gets registered successfully, and I can see the machines under Servers. But there I only see the machine agent process and no other processes. Is it capturing details only for its own container, or for the whole pod? And if I go to Tiers & Nodes for my application, there are no details under Servers there. Also, the machine agent status is "Agent not installed" under the Agents tab for each node. So it looks like the machine agent is not able to detect the nodes of my application, or is not able to attach to them.
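Not a confirmed fix, but one thing that commonly breaks the APM-node-to-server association in a sidecar setup is the Java agent and the machine agent reporting different host identities. A rough sketch of aligning them on a shared unique host ID; the property appdynamics.agent.uniqueHostId and the machine-agent environment variable APPDYNAMICS_AGENT_UNIQUE_HOST_ID are assumptions to verify against your agent versions' documentation, while the POD_NAME value comes from the standard Kubernetes Downward API:

# added to BOTH the machine-agent sidecar and the app container's env list
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  # machine agent side (assumed variable name - check the image docs)
  - name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
    value: "$(POD_NAME)"
  # app container side: pass the same ID to the java agent
  - name: JDK_JAVA_OPTIONS
    value: "-javaagent:/opt/temp/javaagent.jar -Dappdynamics.agent.uniqueHostId=$(POD_NAME) -Djava.net.useSystemProxies=true"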
I want the X axis to follow the same order as the legend.
Hello, CPU usage is showing very high (more than 92%) in the Monitoring Console. How do we find out which resources are causing this? If it is due to real-time searches or ad-hoc search queries (or scheduled searches/reports), how do we identify which users (or searches) are responsible? We have a number of users remotely accessing the Splunk search heads and reports, running search queries and scheduling searches/reports based on their requirements. Any recommendation will be highly appreciated. Thank you!
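One starting point is the audit index on the search head; a rough sketch, assuming the standard fields in index=_audit (verify them in your environment), that ranks users by cumulative search runtime:

index=_audit action=search info=completed
| stats count as searches sum(total_run_time) as total_run_time_s by user
| sort - total_run_time_s

Per-process CPU for individual searches is also visible in index=_introspection (sourcetype=splunk_resource_usage, component=PerProcess), which the Monitoring Console's search activity dashboards are built on.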
Hi, this might be a stupid question, but I am new to Splunk. We are trying to upgrade Splunk ES on the search head, and I need to copy the .spl file onto the search head so that I can upgrade the app. A bit about the environment: all our search heads are on Linux hosted in Azure, and access to these hosts is managed through Cloudflare. We access these Linux hosts using the Azure CLI from PowerShell. I have downloaded the .spl file to my local Windows machine. I tried SCP and WinSCP (GUI), but I am unable to connect to the search heads and copy the file over to the Linux box from my Windows host. Does anyone know how to achieve this? Your help will be highly appreciated.
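Not the only option, but if the hosts sit behind Cloudflare Access, one common pattern is to tunnel scp through cloudflared from the Windows machine. A sketch with made-up hostname and user; it assumes cloudflared is installed locally and your Access policy allows SSH to the search head:

# %USERPROFILE%\.ssh\config on the Windows host
Host splunk-sh1.example.com
    User azureuser
    ProxyCommand cloudflared access ssh --hostname %h

# then from PowerShell
scp .\splunk-es-upgrade.spl azureuser@splunk-sh1.example.com:/tmp/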
Individually these searches work:

```#1 sum all values in field repeat_count in all threat logs that are M,H,C severity```
index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| stats sum(repeat_count) as TotalCount

```#2 sum all repeat_count values for the top 10 signatures```
index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| stats sum(repeat_count) as Top_10_Threats_per_Day by signature
| sort 10 -Top_10_Threats_per_Day
| stats sum(Top_10_Threats_per_Day) as Top-10

I am trying to get the two values into a timechart:
| timechart span=1d values(TotalCount) as "Total", values(Top-10) as "Total of top 10"

I tried a subsearch {search 1 [search 2 | fields Top-10]}, and I tried multisearch.
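One way to get both values per day in a single pass, without join, append, or a subsearch; this is a sketch built directly on the searches above, with the only added assumption being the 1-day bucketing:

index=FW host=InternetFW sourcetype="fw:threat" severity IN (medium, high, critical)
| bin _time span=1d
| stats sum(repeat_count) as sig_count by _time signature
| eventstats sum(sig_count) as daily_total by _time
| sort 0 _time -sig_count
| streamstats count as rank by _time
| eval top10=if(rank<=10, sig_count, 0)
| stats max(daily_total) as "Total" sum(top10) as "Total of top 10" by _time

Note that a field name containing a hyphen (Top-10) has to be quoted or renamed in SPL, otherwise it is parsed as a subtraction, which is one reason the timechart attempt returns nothing.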
On the topic of managing applications from Splunkbase, I have a few questions. Take the TA-Exchange-Mailbox as an example.

Firstly, the installation documentation for this application seems incorrect. While it references the application being installed at the collection tier/input phase, it doesn't mention anything about the search tier/phase, despite the existence of stanzas in props/transforms. Is the installation location implied? Here are the links to the documentation for reference:
https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Configurationparametersandthedatapipeline
https://docs.splunk.com/Documentation/AddOns/released/MSExchange/About#Splunk_Add-on_for_Microsoft_Exchange_Component_Installation_Locations

Secondly, how should I manage the application at the various tiers of my deployment? The TA-Exchange-Mailbox application has inputs, props, transforms, and so on. Should I delete inputs.conf before putting the application on my search head? Do I delete props or transforms before adding it to my universal forwarders? If it doesn't matter, because Splunk will only use what is appropriate at each phase, then I can manage a single app with all of the configs and deploy it everywhere, which seems to be the least overhead.

Thirdly, if I need to modularize the application and apply config files at their respective tiers, can I rename it? This way, I would have two separate applications to better manage changes in source control:
TA-Exchange-Mailbox_inputs
TA-Exchange-Mailbox_props (or parsing, or whatever)

I would appreciate any advice or best practices on managing applications from Splunkbase. Thank you!
Hi, here is my data, in two logs with three fields.

Log 1: Books Bought AccountName={} , BookIds={} (here BookIds can contain multiple book IDs), e.g.:
Books Bought AccountName={ABC} , BookIds={book1, book2, book3}
Books Bought AccountName={ABC} , BookIds={book1}
Books Bought AccountName={DEF} , BookIds={book1, book2}
Books Bought AccountName={EPF} , BookIds={book1, book3}
Books Bought AccountName={EPF} , BookIds={book1}

Log 2: Books Sold AccountName={} , BookId={} (here BookId contains only one book ID), e.g.:
Books Sold AccountName={ABC} , BookId={book2}
Books Sold AccountName={EPF} , BookId={book1}
Books Sold AccountName={EPF} , BookId={book1}

Result I want:

AccountName | Total Books | bookName | bought | sold
ABC         | 4           | book1    | 2      | 0
            |             | book2    | 1      | 1
            |             | book3    | 1      | 0
DEF         | 2           | book1    | 1      | 0
            |             | book2    | 1      | 0
EPF         | 3           | book1    | 2      | 2
            |             | book3    | 1      | 0

Can anyone please help me? I have tried but am not getting the result.
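A sketch of one approach, assuming an illustrative index name and that the raw events really look like the samples above (the rex pattern and the split on "," are the assumptions to verify):

index=books ("Books Bought" OR "Books Sold")
| rex "Books (?<action>Bought|Sold) AccountName=\{(?<AccountName>[^}]+)\}\s*,\s*BookIds?=\{(?<books>[^}]*)\}"
| eval bookName=split(books, ",")
| mvexpand bookName
| eval bookName=trim(bookName)
| stats count(eval(action="Bought")) as bought count(eval(action="Sold")) as sold by AccountName bookName
| eventstats sum(bought) as "Total Books" by AccountName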
I installed a Universal Forwarder and no inputs have been added yet, but there is still gradual memory growth. Why is there constant memory growth with the Universal Forwarder? More importantly, in a Kubernetes cluster setting, every extra MB of memory usage matters. This applies to all Splunk instance types except indexers/heavy forwarders.
I'm having difficulty ingesting log data from flat files into Splunk. I'm monitoring six different directories, each containing 100-1000 log files, some of which are historical and will require less ingestion in the future. However, I'm seeing inconsistent results and not all logs are being ingested properly.

Here's an example of the issue: when all six monitors are enabled, I don't see any data from [file-monitor5] or [file-monitor6]. If I disable 1-3, I start seeing logs from [file-monitor5], but not [file-monitor6]. I have to disable 1-5 to get logs from [file-monitor6].

I'm wondering whether Splunk doesn't monitor all inputs at the same time, or whether it ingests monitored files based on timestamp, taking the earliest file in each folder. The configuration for each monitor ([file-monitor1] through [file-monitor6]) follows the same pattern:

[file-monitor1://C:\example]
whitelist = .log$|.LOG$
sourcetype = ex-type
queue = parsingQueue
index = test
disabled = false

Can anyone provide insight into what might be causing the inconsistent results and what I can do to improve the ingestion process?
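Two settings worth checking before concluding that monitors are processed one at a time: the forwarder's default thruput cap and the number of ingestion pipelines. A sketch; the values are examples to illustrate the knobs, not recommendations for your environment:

# limits.conf on the forwarder
[thruput]
maxKBps = 0          # default on a Universal Forwarder is 256 KBps, which throttles large backfills

# server.conf on the forwarder
[general]
parallelIngestionPipelines = 2   # extra pipeline so one busy monitor does not starve the rest (costs CPU/RAM)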
Sometimes I run a really complex query and accumulate the results in a lookup table. I recently tried doing this while including a sparkline, which gave me a field (trend) that looked like:
##__SPARKLINE__##,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,63,55,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
If I just run "| inputlookup" to visualize that data, I get the raw data back. Is there a command that turns the stored sparkline data back into a sparkline?
Hi, I have ingested NATS stream details in JSON format into Splunk, and an event looks like the sample below. I want to extract key-value pairs from it. Any help is appreciated; thanks in advance!

I am looking to extract the values of these keys:
messages
bytes
first_seq
first_ts
last_seq
last_ts
consumer_count

JSON format:

{
  "config": {
    "name": "test-validation-stream",
    "subjects": [
      "test.\u003e"
    ],
    "retention": "limits",
    "max_consumers": -1,
    "max_msgs_per_subject": -1,
    "max_msgs": 10000,
    "max_bytes": 104857600,
    "max_age": 3600000000000,
    "max_msg_size": 10485760,
    "storage": "file",
    "discard": "old",
    "num_replicas": 3,
    "duplicate_window": 120000000000,
    "sealed": false,
    "deny_delete": false,
    "deny_purge": false,
    "allow_rollup_hdrs": false,
    "allow_direct": false,
    "mirror_direct": false
  },
  "created": "2023-02-14T19:26:42.663470573Z",
  "state": {
    "messages": 0,
    "bytes": 0,
    "first_seq": 39482101,
    "first_ts": "1970-01-01T00:00:00Z",
    "last_seq": 39482100,
    "last_ts": "2023-03-18T03:10:35.6728279Z",
    "consumer_count": 105
  },
  "cluster": {
    "name": "cluster",
    "leader": "server0.mastercard.int",
    "replicas": [
      {
        "name": "server1",
        "current": true,
        "active": 387623412
      },
      {
        "name": "server2",
        "current": true,
        "active": 387434624
      }
    ]
  }
}
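If each event is the whole JSON document, spath should be enough; a sketch, with the index and sourcetype as placeholders to replace with your own:

index=nats sourcetype=nats:stream
| spath path=state.messages output=messages
| spath path=state.bytes output=bytes
| spath path=state.first_seq output=first_seq
| spath path=state.first_ts output=first_ts
| spath path=state.last_seq output=last_seq
| spath path=state.last_ts output=last_ts
| spath path=state.consumer_count output=consumer_count
| table messages bytes first_seq first_ts last_seq last_ts consumer_count

If the sourcetype already has JSON auto-extraction enabled, the same values are likely available directly as fields named state.messages, state.bytes, and so on.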
Hi, I have a particular service that we trigger occasionally, and I would like to know the earliest time of each occasion it gets kicked off. For example, here is the data:

_time                     | service | message                | Host
2022-07-08T05:47:22.029Z  | abc     | calling service 123    | host123.com
2022-07-08T05:49:17.029Z  | abc     | Talking to service 123 | host123.com
2022-10-11T01:00:39.029Z  | abc     | calling service 123    | host123.com
2022-10-11T01:02:46.029Z  | abc     | Talking to service 123 | host123.com

The expected outcome would be:

Host         | starting_time
host123.com  | 2022-07-08T05:47:22.029Z
host123.com  | 2022-10-11T01:00:39.029Z

I am aware I probably have to use streamstats somewhere, but given that all the other fields are identical, earliest time by host won't work. Also, I am backfilling the data for 6 months, so I need something reasonably efficient. I only care about the starting_time of each time the service starts.
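A sketch of a streamstats-based approach: sort by host and time, compute the gap to the previous event for the same host, and keep only events that start a new run. The index name, the message filter, and the 3600-second gap threshold are assumptions to adjust:

index=myindex service=abc "calling service 123" earliest=-6mon@mon
| sort 0 Host _time
| streamstats current=f last(_time) as prev_time by Host
| eval gap=_time - prev_time
| where isnull(prev_time) OR gap > 3600
| eval starting_time=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3NZ")
| table Host starting_time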
Hi, I am exporting logs from my SAS server, but Splunk is splitting one big event into multiple small events with identical timestamps. I want to combine these small events into one event in Splunk (at index time or search time). Please refer to the _raw log below.

2021-09-16T14:56:13,979 INFO [00000003] :sas - NOTE: Unable to open SASUSER.PROFILE. WORK.PROFILE will be opened instead.
2021-09-16T14:56:13,980 INFO [00000003] :sas - NOTE: All profile changes will be lost at the end of the session.
2021-09-16T14:56:13,980 INFO [00000003] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: Copyright (c) 2016 by SAS Institute Inc., Cary, NC, USA.
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: SAS (r) Proprietary Software 9.4 (TS1M7)
2021-09-16T14:56:14,003 INFO [00000006] :sas - Licensed to MSF -SI TECH DATA (DMA DEV), Site 70251144.
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: This session is executing on the Linux 3.10.0-1160.83.1.el7.x86_64 (LIN X64) platform.
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: Additional host information:
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - Linux LIN X64 3.10.0-1160.83.1.el7.x86_64 #1 SMP Mon Dec 19 10:44:06 UTC 2022 x86_64 Red Hat Enterprise Linux Server release 7.9 (Maipo)
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,006 INFO [00000006] :sas - You are running SAS 9. Some SAS 8 files will be automatically converted
2021-09-16T14:56:14,007 INFO [00000006] :sas - by the V9 engine; others are incompatible. Please see
2021-09-16T14:56:14,007 INFO [00000006] :sas - http://support.sas.com/rnd/migration/planning/platform/64bit.html
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas - PROC MIGRATE will preserve current SAS file attributes and is
2021-09-16T14:56:14,007 INFO [00000006] :sas - recommended for converting all your SAS libraries from any
2021-09-16T14:56:14,007 INFO [00000006] :sas - SAS 8 release to SAS 9. For details and examples, please see
2021-09-16T14:56:14,007 INFO [00000006] :sas - http://support.sas.com/rnd/migration/index.html
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas - This message is contained in the SAS news file, and is presented upon
2021-09-16T14:56:14,007 INFO [00000006] :sas - initialization. Edit the file "news" in the "misc/base" directory to
2021-09-16T14:56:14,007 INFO [00000006] :sas - display site-specific news and information in the program log.
2021-09-16T14:56:14,007 INFO [00000006] :sas - The command line option "-nonews" will prevent this display.
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,008 INFO [00000006] :sas - NOTE: SAS initialization used:
2021-09-16T14:56:14,008 INFO [00000006] :sas - real time 0.19 seconds
2021-09-16T14:56:14,008 INFO [00000006] :sas - cpu time 0.08 seconds
2021-09-16T14:56:14,008 INFO [00000006] :sas -
2021-09-16T14:56:14,331 INFO [00000005] :sas - SAH011001I SAS Metadata Server (8561), State, starting
2021-09-16T14:56:14,362 INFO [00000009] :sas - The maximum number of cluster nodes was set to 8 as a result of the OMA.MAXIMUM_CLUSTER_NODES option.
2021-09-16T14:56:14,362 INFO [00000009] :sas - OMACONFIG option 1 found with value OMA.SASSEC_LOCAL_PW_SAVE and processed.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using AES with 64-bit salt and 10000 iterations for password storage.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using SASPROPRIETARY for password fetch.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using SHA-256 with 64-bit salt and 10000 iterations for password hash.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SAS Metadata Authorization Facility Initialization.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SAS is an adminUser.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SASTRUST@SASPWI is a trustedUser.
2021-09-16T14:56:15,170 INFO [00000009] :sas - SASADM@SASPWI is an unrestricted adminUser.

Thanks in advance.
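Because every line carries its own timestamp, index-time line merging is awkward here; a search-time sketch instead, grouping lines by the bracketed thread/session id. The sourcetype name, the 5-second maxpause, and treating that id as a session key are assumptions to validate against your data:

index=sas sourcetype=sas:log
| rex "^\S+\s+\w+\s+\[(?<session_id>\d+)\]"
| transaction session_id maxpause=5s
| table _time session_id _raw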
Hello, I have a CSV file with two fields (field1,field2). The file is monitored and its content is indexed; however, the content of the file is updated on a daily basis and I want to index only the changes to the file.

Example:

Day 1
abcd,100122
abde,100122
abcdf,100122

Day 2 (where the last two lines are new in the CSV file and need to be ingested)
abcd,100122
abde,100122
abcdf,100122
bcda,100222
bcdb,100222
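For reference, a monitor input only reads content appended after the last indexed position, so Day 2's new rows should come in on their own as long as the file is appended to rather than regenerated from scratch each day. A minimal sketch with an illustrative path and sourcetype:

# inputs.conf on the forwarder
[monitor:///opt/exports/daily.csv]
sourcetype = my_daily_csv
index = main
initCrcLength = 1024   # widen the CRC window if several files start with the same header lines

If the file is fully rewritten each day, Splunk cannot diff it; in that case an append-only export or a scripted diff is the usual workaround.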
I wrote a simple macro that builds a full-name string when passed parameters for FirstName, MiddleName, and LastName; the first screenshot shows the macro definition. I can pass values explicitly to the macro, but not by reference from the query that invokes it; the second screenshot shows the behavior of the macro both when I explicitly pass the values and when I attempt to pass them by reference, with "Use eval-based definition" NOT checked. If I DO check the "Use eval-based definition" box, I get the following error: "Error in 'SearchParser': The definition of macro 'CRE_getFullNameTEST(3)' is expected to be an eval expression that returns a string." What do I have to do to be able to pass the values contained in FirstName, MiddleName, and LastName to my macro? Thanks for any assistance with this.
[Screenshot: macro definition]
[Screenshot: SPL that invokes the macro]
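One pattern that usually works for passing field values by reference: leave "Use eval-based definition" unchecked so the macro is pure text substitution, and call it inside an eval; the field names you pass then expand into the eval expression. A sketch using the names from the post (the sample search is hypothetical):

Macro name:  CRE_getFullNameTEST(3)
Arguments:   FirstName, MiddleName, LastName
Definition:  $FirstName$." ".$MiddleName$." ".$LastName$

```invoked from inside an eval, passing field names by reference```
index=people sourcetype=person_records
| eval FullName=`CRE_getFullNameTEST(FirstName, MiddleName, LastName)`
| table FirstName MiddleName LastName FullName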
Our KV store, which uses the WiredTiger storage engine, slowly grows until it consumes the entire cache and then eventually grows beyond the cache until restarted. It requires rolling restarts about once a week, and this has persisted for months. Has anyone else had this issue?
Ready to level up your skills with Ingest Actions? It's not just about filtering, masking, and routing data: using Ingest Actions enables you to optimize costs and achieve greater efficiencies in data transformation. Learn about:
- Large-scale architecture when using Ingest Actions
- RegEx performance considerations, without shooting yourself in the foot
- Leveraging Ingest Actions when you don't want to spend a ton of compute resources on screening every single event in a stream
- Popular ways to use eval in your rulesets
- The latest features added since launch