All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everybody, I would like to duplicate data coming from my sourcetype in such a way:
- send the original data to Splunk for indexing.
- send the duplicated events to an external server with a "<DNS>" prefix string.
How should I modify the transforms.conf file in order to do that? Another question: is there a better way to forward logs to an external server while keeping the original source host (source IP), instead of adding prefixes like I'm trying to do? Thanks in advance, Angelo
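A minimal sketch of one possible approach, using event cloning plus syslog routing on a heavy forwarder; the sourcetype name my_sourcetype, the cloned sourcetype my_sourcetype_external, the output group external_syslog, and the server address are placeholders, not from the original post:

# props.conf
[my_sourcetype]
TRANSFORMS-clone_for_external = clone_to_external

[my_sourcetype_external]
# route only the cloned copy out, and optionally prepend a string to it
TRANSFORMS-route_external = route_to_external
SEDCMD-add_prefix = s/^/<DNS> /

# transforms.conf
[clone_to_external]
REGEX = .
CLONE_SOURCETYPE = my_sourcetype_external

[route_to_external]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = external_syslog

# outputs.conf
[syslog:external_syslog]
server = external.example.com:514

The original events keep flowing to the indexers through the default tcpout group, while the cloned copy is sent to the syslog destination. Depending on requirements, the clone may also need a _TCP_ROUTING or nullQueue transform so that the copy is not indexed as well.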
Hi everybody, I've been struggling for hours to install Splunk's universal forwarder on Windows Server 2022. Here are the msiexec logs: https://drive.google.com/file/d/1NtNN9mT97-gbwprIc4cCAec5mi7Jhl6H/view?usp=sharing  Help
We have been trying to address a problem that exists between our Splunk deployment and AWS Firehose, namely that Firehose adds 250 bytes of useless JSON wrapper to all log events (which, when multiplied by millions/billions of events, increases our storage and license costs enormously). To address this we turned to a combination of INGEST_EVAL on our heavy forwarders, which will:
1. Strip the JSON envelope from the event
2. Unescape all of the JSON quotes in the actual log data, making it parseable JSON once again
3. Assign the logStream/logGroup values to host/source respectively

This is somewhat working, and when we look in Splunk our events appear to be showing up with all the appropriate fluff removed (logGroup, logStream, message and timestamp are all values added by AWS Firehose). After the processing, the event is much, much smaller without losing any necessary information. However, to our surprise this has not had any impact on ingestion levels. It seems to be exactly the same. We also noticed that all of these fields, even though they do not appear in the event view, are actually available and indexed in the "interesting fields" area, which seems to explain why our ingestion/storage has not decreased at all.

For reference, these are the props/transforms I'm using to accomplish this:

Props.conf:
[source::http:AWS2Splunk]
TRANSFORMS-hostname = changehost
TRANSFORMS-sourceinfo = changesource
priority = 100

[aws:firehose:json]
priority = 1000
TRANSFORMS-stripfirehosewrapper = stripfirehosewrapper

Transforms.conf:
[changehost]
DEST_KEY = MetaData:Host
REGEX = \,.logStream...([^\"]+)\"\,\"timestamp
FORMAT = host::$1

[changesource]
DEST_KEY = MetaData:Source
REGEX = \,.logGroup...([^\"]+)\"\,\"logStream
FORMAT = source::$1

[stripfirehosewrapper]
INGEST_EVAL = _raw=replace(replace(replace(replace(replace(_raw,"\{\"message\"\:\"",""),"..\"logGroup\"\:\".*",""),"\\\\\"","\""),"\\\{2}","\\"),"\"stream\":\"\w+\"\,","")

Does anyone have any thoughts as to what we're doing wrong? Is this possibly a conflict with doing a DEST_KEY before INGEST_EVAL? Will these two not necessarily play nice together?

UPDATE: I changed from DEST_KEY to using INGEST_EVAL completely... it still seems to be the same issue:

[changehost]
#DEST_KEY = MetaData:Host
#REGEX = \,.logStream...([^\"]+)\"\,\"timestamp
#FORMAT = host::$1
INGEST_EVAL = host=replace(replace(_raw,".*\"\,\"logStream\"\:\"",""),"\"\,\"timestamp\".*","")

[changesource]
#DEST_KEY = MetaData:Source
#REGEX = \,.logGroup...([^\"]+)\"\,\"logStream
#FORMAT = source::$1
INGEST_EVAL = source=replace(replace(_raw,".*logGroup\"\:\"",""),"\"\,\"logStream\"\:.*","")
Hi, I'm filtering a search to get results for specific values by checking them manually this way:

.... | stats sum(val) as vals by value | where value="v1" OR value="v2" OR value="v3"

I'm wondering if it is possible to do the same by checking if the value exists in a list coming from another index (something like this):

.... | append [search index=another_index | stats values(remote_value) as values_list] | stats sum(val) as vals by value | where (value in values_list)
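A minimal sketch of one way to express this with a subsearch, assuming the matching values live in the remote_value field of another_index (the rename lets the subsearch output be used directly as a filter on value):

....
| stats sum(val) as vals by value
| search [ search index=another_index | dedup remote_value | rename remote_value as value | fields value | format ]

The subsearch expands into a condition like ( ( value="v1" ) OR ( value="v2" ) ... ), so it is subject to the usual subsearch result limits.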
Hi everyone, I'm trying to install the AppDynamics SaaS agent in a container running on a Graviton Linux instance (ARM64). Has anyone done this before? Or is it not possible to run it on Linux ARM64? Thanks!
We are using Splunk in an Enterprise environment with a very large-scale operation. Management has asked why the above-mentioned app is included in the package. We are not using it and would prefer not to have it in the future.
Hello, I need to ingest Cynet XDR audit and alert events into our Splunk Cloud solution but cannot find any procedure docs, neither from Cynet nor from Splunk. Does someone know how to do this, or can you point me to a starting point? Thank you
Before upgrading in production, can I ask for the update notes / changelog of the Splunk Add-on for ServiceNow 7.4.1 > 7.5.0? Thanks
Hello, I have a data model named firewall_logs with firewall data, in which the interesting fields are file_hash, url and source/dest IP. And I have a dataset named intel_indicators with a column named ioc, in which I have hashes, IPs, domains and a timestamp. What I want to do is to compare the data (hashes, IPs, domains) from the ioc column with the fields file_hash, url and dest_ip. If there is a match, it should be visible. Any idea how I can accomplish this?

| tstats summariesonly=t allow_old_summaries=t  ...interesting fields.... from datamodel="firewall_logs"

and here I'm stuck
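A minimal sketch of one way to do the comparison, assuming intel_indicators is available as a lookup with an ioc field and that the data model's root dataset is also named firewall_logs (both assumptions; adjust to the actual object names):

| tstats summariesonly=t allow_old_summaries=t count from datamodel=firewall_logs by firewall_logs.file_hash firewall_logs.url firewall_logs.dest_ip
| rename firewall_logs.* as *
| lookup intel_indicators ioc AS file_hash OUTPUTNEW ioc AS hash_match
| lookup intel_indicators ioc AS url OUTPUTNEW ioc AS url_match
| lookup intel_indicators ioc AS dest_ip OUTPUTNEW ioc AS ip_match
| where isnotnull(hash_match) OR isnotnull(url_match) OR isnotnull(ip_match)

Only rows where at least one field matched an indicator survive the final where clause.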
Hello, I have a Python script like this:

#!/bin/python
import os
import json
import datetime

HOMEPATH = '/opt/monitor_dirs/SomeDir'

def path_to_dict(path, depth = 1, first = False):
    for base, dirs, files in os.walk(path):
        r = {'name': base, 'dirs': len(dirs), 'files': len(files)}
        if first:
            r['datetime'] = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S%z")
        if depth > 0:
            r['subdirs'] = {}
            for subdir in dirs:
                r['subdirs'][subdir] = path_to_dict(os.path.join(path, subdir), depth - 1)
        return r

#print path_to_dict(HOMEPATH, 1)
result = path_to_dict(HOMEPATH, 1, True)
if result:
    print(json.dumps(result, sort_keys=True, indent=4))

And I have this output:

# ./file_count.py
{
    "datetime": "2023-02-22T21:10:49",
    "dirs": 9,
    "files": 0,
    "name": "/opt/monitor_dirs/SomeDir",
    "subdirs": {
        "XXXX": {
            "dirs": 0,
            "files": 63,
            "name": "/opt/monitor_dirs/XXXX"
        }
    }
}

The problem is that in the index I get 2 events instead of just one:
1. {
2. "datetime": "2023-02-22T21:10:49", "dirs": 9, "files": 0, and so on, but there is no '{'

How can I get only one event with my JSON?
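A minimal props.conf sketch for the parsing tier (indexer or heavy forwarder), assuming the scripted input is assigned a sourcetype such as dir_count_json (the sourcetype name is a placeholder); the idea is to stop Splunk from breaking the pretty-printed JSON at the first newline:

# props.conf
[dir_count_json]
SHOULD_LINEMERGE = false
# break events only where a newline is immediately followed by a top-level opening brace
LINE_BREAKER = ([\r\n]+)\{
TRUNCATE = 0

With settings along these lines, each run of the script should land as a single multi-line JSON event.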
Greetings, I'm finally tackling the topic of data models within my organization, and am coming across situations I need to solve for. 1. Windows authentication data which has null values in the src field, due to the type of authentication taking place. I understand that field aliasing comes into play, and I tried that - however, I tried aliasing a calculated field, which does not work of course. Now I am having to go back to see if there is another field I can alias instead. I guess my ask with this post is to get some strategies from other Splunk users who have tackled data cleanup and data models. Are null values acceptable for certain situations? Or must every required data model field be complete, such as action, app, dest, src, user, etc.? I would appreciate some feedback regarding this topic.
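As one illustrative strategy (a sketch only; the sourcetype and fallback field names below are assumptions, not taken from the post), a calculated field can coalesce through several candidate fields instead of relying on an alias:

# props.conf on the search head
[XmlWinEventLog]
EVAL-src = coalesce(src, src_ip, Source_Network_Address, "unknown")

Because calculated fields are evaluated after field extractions and aliases, the coalesce picks the first non-null candidate, and the literal fallback keeps the data model field from being null.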
Hi, I am trying to monitor many Exchange servers that are not configured the same. I was giving the monitor paths using an environment variable, such as %ExchangeInstallPath%TransportRoles\Logs\FrontEnd\AgentLog\*, assuming splunkd runs under a user that can read the Windows variable. Is it possible to monitor like this?

[monitor://%ExchangeInstallPath%TransportRoles\Logs\FrontEnd\AgentLog]

Or

[monitor://$ExchangeInstallPath\TransportRoles\Logs\FrontEnd\AgentLog]

Being able to do this would prevent having to create multiple stanzas with different drives, like:

[monitor://C:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
[monitor://D:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
[monitor://E:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]

If there are any other suggestions (other than the obvious, like standardizing installs), please advise. Thank you
Hello, please help me identify my issue; maybe I'm missing something I don't see. I created a simple PowerShell script to get data from a Certificate Authority server (using the certutil command), then packaged it as a Splunk application. After I deployed the app on the CA server with Splunk installed and executed the script manually from PowerShell ISE, I can see output on the console. But during scheduled execution there's no data in my index. There are no errors in the internal logs, so I can't identify where the issue is. Any feedback will help, thanks. Also, I already tried the workarounds from other threads (like using a .path file, the powershell stanza, etc.) and they still didn't work.

My .bat file:
@ECHO OFF
Powershell.exe -executionpolicy remotesigned -File "%~dpn0.ps1"

inputs.conf:
[script://.\bin\scripts\get_ca_issued_certs.bat]
disabled = 0
index = cert_authority_idx
sourcetype = ca_issued_certs
interval = 300

Internal logs:
02-22-2023 05:41:24.397 -0800 INFO ExecProcessor [6372 ExecProcessor] - New scheduled exec process: "C:\Program Files\Splunk\etc\apps\cert_authority_win_uf\bin\scripts\get_ca_issued_certs.bat"

Output when manually executed:
Date=2023-02-22_06:02:00_-08:00,object=Cert Authority,counter=Issued Certs Expiry,RequestID=4,RequesterName=NT AUTHORITY\IUSR,SerialNumber=2a0000000455e56fc1482ef85f000000000004,NotAfter=2/21/2024 7:37 AM,Value=364
Date=2023-02-22_06:02:00_-08:00,object=Cert Authority,counter=Issued Certs Expiry,RequestID=5,RequesterName=NT AUTHORITY\IUSR,SerialNumber=2a000000052914506fdbd37f24000000000005,NotAfter=2/21/2024 7:39 AM,Value=364
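For reference, a couple of hedged troubleshooting searches that are often useful in this situation; the script name is taken from the post, everything else is standard _internal data, so adjust as needed:

index=_internal sourcetype=splunkd component=ExecProcessor get_ca_issued_certs*
index=_internal sourcetype=splunkd component=ExecProcessor log_level=ERROR

The first shows whether the forwarder is actually launching the script on schedule and whether it writes anything to stderr; the second surfaces any ExecProcessor errors that do not mention the script by name.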
Hi folks, I have a SHC with 3 members running Splunk ES. Currently, when ES triggers a notable, the notable triggers 3 times even though the throttling is correctly configured. In my opinion the SHC is out of sync. Do you have any suggestions? Regards
Our O365 API keys are expiring, and we are attempting to update them. While doing so we have a couple of questions. Are there different Splunk instances for the search head, indexer, and data manager? If yes, what are the URLs? We are having difficulty locating a knowledge base article on how to update the API keys. Could you please provide the relevant knowledge base article? Thanks
I'm trying to create a drilldown for a single value panel. I want my user to be able to click on the value and have it load a new panel with all the details. I have set a token but am not sure where to pass it in the detailed panel so that the drilldown works. Here is my single value panel query:

| eval A = if(DURATION>30, "Long Duration Jobs","Duration")
| stats count by A
| where A="Long Duration Jobs"

I have another panel which shows details of these long duration jobs:

| eval Duration = if(DURATION>30, "Long Duration Jobs", "Duration")
| search Duration = "Long Duration Jobs"
| rename EXEC_DATE_TIME as Datetime SERVER_NAME as "System Name" JOB_NAME as "Job Name" STATUS_NAME as "Status" EXEC_DATETIME as "Execution Datetime" DURATION as "Duration(s)" DELAY as "Delay(s)" JOB_COUNT as "Job Count" JBCREATED_BY as "Job Createdby" SDL_DATETIME as "SDL Datetime"
| table Datetime "System Name" "Job Name" "Execution Datetime" "Status" Duration "Duration(s)" "Delay(s)" "Job Count" "Job Createdby" "SDL Datetime"

How do I connect these two panels so that when I click on the single value, the detailed panel pops up? Please suggest.
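A minimal Simple XML sketch of the wiring, assuming a classic (Simple XML) dashboard; the token name show_details is a placeholder, and the query bodies stand in for the two searches above:

<panel>
  <single>
    <search>
      <query>... single value search from above ...</query>
    </search>
    <drilldown>
      <set token="show_details">true</set>
    </drilldown>
  </single>
</panel>
<panel depends="$show_details$">
  <table>
    <search>
      <query>... detailed long-duration jobs search from above ...</query>
    </search>
  </table>
</panel>

The detail panel stays hidden until the single value is clicked, because depends only renders the panel once the token is set.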
Hi Team, we are planning to migrate the heavy forwarders to new servers. We have some apps on the heavy forwarder, like DB Connect. Question: are there any prechecks needed before the migration, and are there any other changes we should ask the app team to make? We are getting many inputs from the app team via HEC tokens and DB Connect. Can you please assist me with this task? Thanks
Hello Splunk Community, I followed different guides and docs trying to install the Docker universal forwarder, but none of them worked. When I try to execute the splunk binary, the Splunk instance in the container appears to try to update itself and gets stuck. I ran the image with this docker-compose.yml:

version: '3.5'
networks:
  splunk:
    name: splunk-test
services:
  # Splunk Universal Forwarder:
  splunk-forwarder:
    container_name: uf1
    image: splunk/universalforwarder:latest
    restart: always
    ports:
      - "9997:9997"
    volumes:
      - ./splunkforwarder-etc:/opt/splunkforwarder-etc
      - ./SPLUNK_HOME_DIR:/opt/splunkforwarder
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=lwetem21
      - SPLUNK_STANDALONE_URL=https://<MY Splunk Enterprise DNS Name>:8000
    networks:
      - splunk

It stops with this output:

[splunk@8de54aed8c1f splunkforwarder]$ pwd
/opt/splunkforwarder
[splunk@8de54aed8c1f bin]$ ./splunk add forward-server idx1.mycompany.com:9997
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
Error calling execve(): No such file or directory
Error launching command: No such file or directory
execvp: No such file or directory
Do you agree with this license? [y/n]: y
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2023-02-22.10-57-49' --
Migrating to:
VERSION=9.0.4
BUILD=de405f4a7979
PRODUCT=splunk
PLATFORM=Linux-x86_64
Error calling execve(): No such file or directory
Error launching command: Invalid argument

The mentioned log, by the way, is an empty file. I pulled the latest image from:
https://hub.docker.com/r/splunk/universalforwarder
https://kinneygroup.com/blog/splunk-universal-forwarders/

What am I doing wrong, or are there better guides to follow than the links I have already provided?

With kind regards, CJ
Configuration is recognized but not applied.

/opt/splunk/etc/apps/jk_cjbeck/local/props.conf:
SEDCMD-StripHeader = s/^[^{]+//
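For reference, a minimal sketch of how a SEDCMD setting usually sits in props.conf, under a sourcetype (or source/host) stanza and on the first full Splunk Enterprise instance that parses the data; the sourcetype name my_sourcetype is a placeholder, not from the post:

# /opt/splunk/etc/apps/jk_cjbeck/local/props.conf
[my_sourcetype]
SEDCMD-StripHeader = s/^[^{]+//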
Are there any APIs for Splunkbase? I want to get the list of all apps available on Splunkbase with the below-mentioned information:
1. Splunk app name
2. Splunk folder name
3. app version
4. compatibility (like whether the app is compatible with Splunk version 7/8/9)
5. CIM compatibility