All Topics


Hi Splunk Community, I am trying to remove the data in a field after the first period. My field looks like this:

24611_abce.XXX.AAA.com
24612_r1q2e3.XXX.AAA.com
null
null
4iop45_q7w8e9.XXX.AAA.com
hki90lhf3_m1n2b3.QQQQ.AAA.com

I would like to remove everything after the first period for every row, but the patterns at the end do not match after the first period. It should look like this:

24611_abce
24612_r1q2e3
null
null
4iop45_q7w8e9
hki90lhf3_m1n2b3

Thanks in advance!
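A minimal sketch of a fix, assuming the field is called your_field (a placeholder for the real field name): split the value on the literal period and keep only the first segment. Values without a period, like null, pass through unchanged.

| eval your_field=mvindex(split(your_field, "."), 0)

A sed-style rex achieves the same thing: | rex field=your_field mode=sed "s/\..*$//"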
This seems to me like it should be super simple (Looker, Tableau, etc.), but I've been working at this for almost 2 days and I'm getting nowhere; I would be very appreciative if anyone could help. I'm trying to chart the percentage difference between the count of _time (i.e. the count of records) and a simple moving average over the last 5 days on the Y axis, with time (spans) on the X axis, where response_code>200, split by path. I'll paste an example of where I'm at, but I know I'm not even close. Can I get any tips please?

index=k8s_events namespace=ecom-middleware NOT method=OPTIONS response_code>200
| streamstats avg(count(_time)) as cTime window=5
| table _time path cTime
| timechart usenull=f span=8h avg(cTime) by path
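A sketch of one possible shape for this, assuming the goal is each path's daily count compared against its own trailing 5-day average (the index, namespace, and field names are taken from the question):

index=k8s_events namespace=ecom-middleware NOT method=OPTIONS response_code>200
| timechart span=1d count by path
| untable _time path count
| streamstats window=5 current=f avg(count) as sma5 by path
| eval pct_diff=round(100*(count-sma5)/sma5, 2)
| xyseries _time path pct_diff

The untable/xyseries pair flattens the per-path columns so streamstats can compute the moving average per path, then pivots the result back into a chartable series.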
We are installing a custom-made app that contains some symlinks, but we're having the following problem: installing it from the web GUI removes all the links, breaking the app, while installing it by copying the app dir into '$SPLUNK_HOME/etc/apps' keeps the symlinks intact and the app works. Is this intended behaviour? Is there any way to use symlinks inside an app? Regards, Javier.
Dear all, We have a controller C1 in which database D1 is cataloged, using agent A1. I need to see D1 from a new controller called C2, but I couldn't see agent A1 in C2. Can someone help me with this?
The latest version of the Splunk Add-on for AWS has changed the JSON for the "AWS Description" ingest; see the examples below. My question is about selecting values from this new 'type' of array. Before, you could select particular values with the following search syntax: tags.Name = "server1"

QUESTIONS
1. How do I make the same search with the newer JSON?
2. What is the technical description for these 2 different forms of arrays?

BEFORE
tags: {
     Environment: test
     Name: server1
}

AFTER
Tags: [
     {
       Key: Environment
       Value: test
     }
     {
       Key: Name
       Value: server1
     }
]
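A sketch of one way to filter on the newer key/value form, pairing each Key with its Value via spath and mvzip (the paths follow the AFTER example above; adjust to the actual event structure):

... | spath path=Tags{}.Key output=tag_keys
| spath path=Tags{}.Value output=tag_values
| eval tag_pairs=mvzip(tag_keys, tag_values, "=")
| search tag_pairs="Name=server1"

Structurally, the old form is a flat map of tag names to values, while the new form is an array of {Key, Value} objects, which is why the dotted tags.Name syntax no longer matches.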
EventGen v7.2.1 throws the following exception. Python: 3.9.2, Docker image: nginx.

eventgen 2022-04-06 15:08:42 eventgen ERROR MainProcess Unexpected character in found when decoding object value
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 136, in updateConfig
    self.httpeventServers = json.loads(config.httpeventServers)
ValueError: Unexpected character in found when decoding object value
2022-04-06 15:08:42 eventgen ERROR MainProcess 'HTTPEventOutputPlugin' object has no attribute 'serverPool'
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 250, in _sendHTTPEvents
    self._transmitEvents(stringpayload)
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 261, in _transmitEvents
    targetServer.append(random.choice(self.serverPool))
AttributeError: 'HTTPEventOutputPlugin' object has no attribute 'serverPool'
2022-04-06 15:08:42 eventgen ERROR MainProcess failed indexing events, reason: 'HTTPEventOutputPlugin' object has no attribute 'serverPool'

EventGen .conf file:

[cyclical.csv]
mode=sample
interval=60
count=1
outputMode=httpevent
httpeventServers = {"servers": [{"protocol": "https", "port": "8088", "key": "0617eea5-87a9-4d18-8ed4-6dc085ddbe2c"", "address": "172.19.15.140"}]}
index=main
sourcetype=eventgen
sampletype=csv
source=eventgen_cyclical

I am running it with:

python3 -m splunk_eventgen -v generate -s cyclical.csv eventgen.conf
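The first traceback shows json.loads failing on httpeventServers, and the value in the .conf above does contain a stray extra double quote after the key (..."ddbe2c""). A corrected line, with the same values and only that extra quote removed, would be:

httpeventServers = {"servers": [{"protocol": "https", "port": "8088", "key": "0617eea5-87a9-4d18-8ed4-6dc085ddbe2c", "address": "172.19.15.140"}]}

Once the JSON parses, serverPool should be populated and the downstream AttributeError should go away as well.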
Hi All, I have to send Splunk Cloud logs to S3 buckets after the 90-day log retention in Splunk, for audit purposes. Can someone point me to how to achieve this, and if there is any documentation for it, please let me know? Thanks in advance!
Hello, We had an issue where a DB input we have fell behind in fetching events. We saw that a few days ago the "Input Jobs Median Duration over Time" chart on the "DB Connect Input Performance" dashboard went from 0 to over 200. Is there a search that can be run to obtain the median duration? I would love to create an alert in case this happens again.
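A sketch of the underlying search, assuming DB Connect 3.x, which logs per-job metrics to the _internal index; the sourcetype and field names below are assumptions to verify against what your instance actually indexes:

index=_internal sourcetype=dbx_job_metrics
| timechart span=1h median(duration) as median_duration by input_name

An alert could then be built on the same base search with a clause like | where median_duration > 200.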
With little to no Splunk experience, I inherited a 7.2.3 Windows deployment (we're on a closed network and I'm not cleared to upgrade yet). I've been finding little things here and there. One of the bigger ones is that I'm ONLY getting _audit logs from the Splunk servers; I'm not getting any audit input from any workstations or other production servers. I've been dredging the boards for 3 days now and haven't found anything that seems along this line. I've checked %Splunk\var\log\audit.log on several hosts, and the hosts' audit logs are being written, but they're not getting ingested. I've gone through the deployment app's inputs.conf and outputs.conf files and don't see any glaring indications. So, I'm asking for ideas on other things to check.
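One quick triage search, as a sketch: list which hosts are successfully forwarding their own internal Splunk logs at all, since a host missing from this list usually has a forwarding/outputs problem rather than an audit-specific one:

index=_internal sourcetype=splunkd
| stats latest(_time) as last_seen by host
| eval last_seen=strftime(last_seen, "%F %T")

If the workstations appear here but not in _audit, the issue is more likely input- or index-scoped; if they don't appear at all, check the forwarder's outputs.conf and the network path to the indexers.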
Our add-on app has several binary files in its bin/ directory. The check_for_binary_files_without_source_code check fails for them, but I discovered 2 things:

1. When packaging the add-on using the Add-on Builder app, the README.txt file gets modified with extra content like the following:

# Binary File Declaration
/opt/splunk/var/data/tabuilder/package/TA-luminar-iocs-and-leaked-credentials/bin/ta_luminar_iocs_and_leaked_credentials/aob_py3/pvectorc.cpython-37m-x86_64-linux-gnu.so: this file does not require any source code

Having these segments in the README.txt file causes the check to omit the given binary file.

2. I tried looking for details about this README.txt behavior, but the only thing I was able to find was an old fork of what appears to be the code of the AppInspect checks: https://github.com/splunkdevabhi/appinspect/blob/master/splunk_appinspect/checks/check_cloud_simple_app.py In particular, the conditional logic related to this behavior is in lines 1827-1852.

Is this use case for binary file descriptions in a README.txt file described in the official documentation? If not, can someone please add it?
Hello, I have an add-on in Splunk that is supported in Splunk Cloud. Recently, with one of our customers, the installation failed because the add-on could not find globalConfig.json:

04-05-2022 14:10:59.731 +0000 ERROR ExecProcessor [31306 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/island_audits_input.py" FileNotFoundError: [Errno 2] No such file or directory: '/opt/splunk/etc/apps/TA-island-add-on-for-splunk/appserver/static/js/build/globalConfig.json'
2022-04-05 14:10:59,730 ERROR pid=12768 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 166, in _parse_input_args_from_global_config
    with open(config_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/splunk/etc/apps/TA-island-add-on-for-splunk/appserver/static/js/build/globalConfig.json'

Important to say, the app was built using the Splunk Add-on Builder, and it seems this library throws the error. The client has a managed environment (they can't install apps themselves or change the settings there). They have Splunk 8.2 Classic. How can we resolve the errors? Thank you very much!
Hello, would it be possible to deploy a universal forwarder that monitors the same log source twice and routes the data differently based on what I want to collect? For example, two different apps on the UF (a sketch of the routing pattern follows this list):

App1:
inputs.conf - all Windows event logs (including Security)
outputs.conf - going to the indexer

App2:
inputs.conf - Windows Security event log (whitelisted for two events)
outputs.conf - separate destination

Would there be any issues reading the Security log twice, or would there be a conflict?
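As a sketch of one common pattern: since two stanzas with the same [WinEventLog://Security] name merge across apps rather than running as two independent inputs, per-input routing with _TCP_ROUTING on a single input is usually the safer shape. Group names and hosts below are placeholders:

# outputs.conf
[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:security_destination]
server = other-dest.example.com:9997

# inputs.conf
[WinEventLog://Application]
_TCP_ROUTING = primary_indexers

[WinEventLog://Security]
# clone the Security stream to both destinations
_TCP_ROUTING = primary_indexers, security_destination

Note that narrowing one destination down to just two event IDs is a filtering question rather than a routing one; on a universal forwarder the whitelist applies to the input itself, so both destinations would receive the same filtered stream.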
Hi All, I want help using a where clause with the eval command. Below is the lookup data:

ID  expense  year
1   10       2021
2   20       2020
3   10       2021
4   30       2019
5   20       2020

What I want, in pseudocode:

eval a = sum(expense) by ID, year where ID IN(1,3)
eval b = sum(expense) by ID, year where ID IN(2,4)
eval c = sum(expense) by ID, year where ID IN(1,2,3,4) [excluding a few IDs from the search]

Can someone help me get this? I tried join to have these values as a subsearch but was not able to get it. Thanks.
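A sketch of one way to express these conditional sums in a single pass: eval has no "where ... by" clause, but stats accepts an eval(...) filter inside the aggregation. The lookup name expenses.csv is a placeholder:

| inputlookup expenses.csv
| stats sum(eval(if(in(ID, "1", "3"), expense, null()))) as a
        sum(eval(if(in(ID, "2", "4"), expense, null()))) as b
        sum(eval(if(in(ID, "1", "2", "3", "4"), expense, null()))) as c

Appending "by year" (or "by ID, year") to the stats clause gives the per-group breakdown from the pseudocode.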
We have a lookup table that contains IDs for specific devices grouped by location IDs. This has been working great for emailing alerts when specific thresholds are triggered. However, we want to be able to trigger SMS messages when an alert for an individual location ID takes place. We find that we can set this up with the Twilio alert action, but we can only specify one phone number. Is there a way to do this using VictorOps or Twilio, or is a modular script the best method?
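One sketch of the lookup-driven approach, assuming a lookup that maps location IDs to on-call numbers (the lookup and field names here are hypothetical): enrich each alert result with its phone number, then configure the alert action to trigger once per result so it can reference the number as $result.phone$:

... base alert search ...
| lookup location_contacts.csv location_id OUTPUT phone
| where isnotnull(phone)

Whether the Twilio action accepts a per-result token for the destination number is worth verifying against that app's documentation; if it only takes a static number, a custom alert action or script reading $result.phone$ is the usual fallback.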
Is there any Splunk query to fetch VMware snapshots? A VM snapshot was created 6 months ago during a change activity but was not deleted after the change. The snapshot file grew huge and caused a performance impact. To prevent this, we want to check the feasibility of using Splunk to detect snapshots of VMs that have existed for more than a week; if one is detected, Splunk should create an automatic incident for the technical support team (Windows / Unix / app owner). Note: all vCenters are already integrated into Splunk for logs and performance monitoring.
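A sketch of the detection idea, assuming the Splunk Add-on for VMware is indexing VM inventory events; the sourcetype and snapshot field paths below are assumptions to verify against what your vCenter integration actually produces:

sourcetype=vmware:inv:vm snapshot.rootSnapshotList{}.createTime=*
| spath path=snapshot.rootSnapshotList{}.createTime output=snap_created
| mvexpand snap_created
| eval snap_age_days=round((now()-strptime(snap_created, "%Y-%m-%dT%H:%M:%S"))/86400, 1)
| where snap_age_days > 7
| dedup host snap_created
| table host snap_created snap_age_days

Saved as an alert, this could feed whatever incident-creation integration (email, webhook, ITSM add-on) the support teams already use.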
Hi Team, I am getting a very frequent alert for one of my search peers from the DMC, even though the search head is up and working fine. I have analyzed the logs but could not find anything abnormal in them except a script runner error. Can you please assist me with this issue?
Good morning all, I'm having a hard time moving the entire C:\Program Files\Splunk folder to a new system. I've seen the "guide" online, but it just says to move the Splunk home folder. Is this the same thing as the entire Splunk folder? My main goal is to get the old logs showing up on the new system. The C:\Program Files\Splunk folder is about 100 gigabytes. I receive an error when trying to zip the folder or transfer it to a NAS (server.pem not allowed). Does anyone have a step-by-step guide on what I need to do for this to work? Do I just need to transfer a particular folder? I only use the default/main index for data. I'm on version 6.x of Enterprise. Please help! - Kevin
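A sketch of the usual shape of this kind of migration, assuming the new system runs the same OS and the same Splunk version, and that $SPLUNK_HOME is the whole C:\Program Files\Splunk folder (it is, by default, on Windows). Stopping Splunk first releases locks on files like server.pem, which is likely what breaks the zip/copy:

REM on the old system, from an elevated prompt
"C:\Program Files\Splunk\bin\splunk.exe" stop
robocopy "C:\Program Files\Splunk" "\\newhost\share\Splunk" /E /COPYALL

REM on the new system, after placing the copy at C:\Program Files\Splunk
"C:\Program Files\Splunk\bin\splunk.exe" start

The old indexed data lives under var\lib\splunk inside that folder, which is why copying only a subfolder would not bring the old logs along.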
Hi all, I have some values under the src field as below, but they have some problems. For example, <1b5a.4.d576d0e8-5fbb-4739-a4c7-6dfbc1a4fd2e@avo-sv.one.com> and 1b5a.4.79406b4a-9326-41b2-94cc-2626e10ea6f6@avo-sv.one.com are actually the same, and I have multiple src values with the same issue. I want to remove all "< >" wrapping wherever a string has it, so there will not be duplicates. Can anyone help me with this? Thank you.
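A minimal sketch of the cleanup: trim any angle brackets off both ends of src, which leaves bracket-free values untouched:

... | eval src=trim(src, "<>")

An equivalent sed-style rex would be | rex field=src mode=sed "s/^<|>$//g"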
This can be handy for dumping a list of installed ES correlation searches with disabled status, description, frameworks, etc. Be sure your user has permissions to all knowledge objects if you don't see ones you know are present in an app context.

| rest splunk_server=local count=0 /servicesNS/-/-/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename title as search_name, eai:acl.app as app, action.correlationsearch.annotations as frameworks
| table search_name, app, description, frameworks, disabled
| spath input=frameworks
| rename mitre_attack{} as mitre_attack, nist{} as nist, cis20{} as cis20, kill_chain_phases{} as kill_chain_phases
| table app, search_name, description, disabled, cis20, kill_chain_phases, nist, mitre_attack
Is there a way to test index-time operations without indexing logs? For example, is there a way I can provide a sample log file and see what the timestamp, host, sourcetype, source, and output after other operations like null-queuing would be? I currently use the "Add Data" section to test timestamping and line-breaking, but this doesn't show other metadata or what will be ingested after null-queuing. I also set up a quick bash command to make copies of the base log samples and have inputs continuously monitor the new files as I'm testing new sourcetypes. I feel like this is a bit inefficient. Thanks in advance for any input!