All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Our add-on app has several binary files in its bin/ directory. The check_for_binary_files_without_source_code check fails for them, but I discovered two things:

1. When packaging the add-on using the Add-on Builder app, the README.txt file gets modified with extra content like the following:

# Binary File Declaration
/opt/splunk/var/data/tabuilder/package/TA-luminar-iocs-and-leaked-credentials/bin/ta_luminar_iocs_and_leaked_credentials/aob_py3/pvectorc.cpython-37m-x86_64-linux-gnu.so: this file does not require any source code

Having these segments in the README.txt file causes the check to skip the given binary file.

2. I tried looking for details about this README.txt behavior, but the only thing I was able to find was an old fork of what appears to be the code of the AppInspect checks: https://github.com/splunkdevabhi/appinspect/blob/master/splunk_appinspect/checks/check_cloud_simple_app.py In particular, the conditional logic related to this behavior is in lines 1827-1852.

Is this use case for binary file declarations in a README.txt file described in the official documentation? If not, can someone please add it?

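Judging from the generated snippet above, the declaration format appears to be one line per binary under a "# Binary File Declaration" header: the file's path, a colon, and the sentence "this file does not require any source code". A sketch with an illustrative path (the path below is hypothetical, not from a real add-on):

# Binary File Declaration
bin/my_addon/aob_py3/native_module.cpython-37m-x86_64-linux-gnu.so: this file does not require any source code
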
Hello, I have an add-on in Splunk that is supported in Splunk Cloud. Recently, with one of our customers, the installation failed because the add-on could not find globalConfig.json:

04-05-2022 14:10:59.731 +0000 ERROR ExecProcessor [31306 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/island_audits_input.py" FileNotFoundError: [Errno 2] No such file or directory: '/opt/splunk/etc/apps/TA-island-add-on-for-splunk/appserver/static/js/build/globalConfig.json'
2022-04-05 14:10:59,730 ERROR pid=12768 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/TA-island-add-on-for-splunk/bin/ta_island_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 166, in _parse_input_args_from_global_config
    with open(config_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/splunk/etc/apps/TA-island-add-on-for-splunk/appserver/static/js/build/globalConfig.json'

It is important to note that the app was built using the Splunk Add-on Builder, and it seems this library is what throws the error. The customer has a managed environment (they can't install apps themselves or change the settings there). They are on Splunk 8.2 Classic. How can we resolve the errors? Thank you very much!

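One quick diagnostic, sketched here under the assumption that you have shell access to a test instance (the paths come straight from the traceback above), is to confirm whether globalConfig.json was actually included in the packaged app:

# Inspect the package without installing it
tar -tzf TA-island-add-on-for-splunk.tgz | grep globalConfig.json

# Or check an installed test instance directly
ls -l /opt/splunk/etc/apps/TA-island-add-on-for-splunk/appserver/static/js/build/globalConfig.json

If the file is absent from the archive itself, the problem is in packaging rather than in the customer's environment.
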
Hello, would it be possible to deploy a universal forwarder that monitors the same log source twice and routes the data differently based on what I want to collect? For example, two different apps on the UF:

App1:
inputs.conf - all Windows event logs (including Security)
outputs.conf - going to the indexer

App2:
inputs.conf - Windows Security event log (whitelisted for two events)
outputs.conf - separate destination

Would there be any issues reading the Security log twice, or would there be a conflict?

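One caution: inputs.conf stanzas with the same name merge across apps, so two apps each defining [WinEventLog://Security] would likely collapse into a single effective input rather than two independent reads. A commonly used alternative is a single Security input routed to both output groups; a minimal sketch (group names and servers are illustrative):

# inputs.conf
[WinEventLog://Security]
disabled = 0
_TCP_ROUTING = primary_indexers, security_destination

# outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

[tcpout:security_destination]
server = other-dest.example.com:9997

This sends every Security event to both destinations; narrowing one destination down to just two EventCodes would need props/transforms-based routing on a heavy forwarder or indexer, since the UF does not parse events.
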
Hi All, I want help using a where clause with the eval command. Below is the lookup data:

ID  expense  year
1   10       2021
2   20       2020
3   10       2021
4   30       2019
5   20       2020

What I want to compute (pseudocode):

eval a = sum(expense) by ID, year where ID IN (1,3)
eval b = sum(expense) by ID, year where ID IN (2,4)
eval c = sum(expense) by ID, year where ID IN (1,2,3,4)  [excluding a few IDs from the search]

Can someone help me get this? I tried join to have these values as a subsearch but was not able to get it. Thanks.

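eval does not take by/where clauses like that, but stats can apply a condition inside each aggregation. A minimal sketch, assuming the data comes from a lookup file named expenses.csv (a hypothetical name):

| inputlookup expenses.csv
| stats sum(eval(if(ID IN (1, 3), expense, null()))) as a,
        sum(eval(if(ID IN (2, 4), expense, null()))) as b,
        sum(eval(if(ID IN (1, 2, 3, 4), expense, null()))) as c
  by year

Each sum(eval(if(...))) only accumulates rows matching its ID list, which reproduces the three filtered sums in a single pass.
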
We have a lookup table that contains IDs for specific devices grouped by location IDs. This has been working great for emailing alerts when specific thresholds trigger alerts. However, we want to be able to trigger SMS messages when an alert for an individual location ID takes place. We find that we can set this up with the Twilio alert action, but we can only specify one phone number. Is there a way to do this using VictorOps or Twilio, or is a modular alert script the best method?

Is there any Splunk query to fetch VMware snapshots? A VM snapshot was created 6 months ago during a change activity but was not deleted after the change. The snapshot file grew huge and caused a performance impact. To prevent this:

- Check the feasibility of detecting, in Splunk, VM snapshots that have existed for more than a week. If one is detected, Splunk should create an automatic incident for the technical support team (Windows / Unix / app owner).

Note: all vCenters are already integrated with Splunk for logs and performance monitoring.

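Assuming the vCenter task/event data is searchable, a rough sketch of the detection logic is below. The index, event text, and field names here are assumptions, not the actual schema of the VMware add-on, so they will need to be mapped to whatever your integration produces:

index=vmware "CreateSnapshot_Task"
| stats latest(_time) as snapshot_created by vm_name
| where snapshot_created < relative_time(now(), "-7d@d")
| eval age_days=round((now() - snapshot_created) / 86400)
| table vm_name snapshot_created age_days

Saved as an alert, this search's alert action could then raise the incident through your ticketing integration.
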
Hi Team, I am getting a very frequent alert for one of my search peers from the DMC, even though the search head is up and working fine. I have analyzed the logs but could not find anything abnormal in them except a script runner error. Can you please assist me with this issue?

Good morning all,

I'm having a hard time moving the entire C:\Program Files\Splunk folder to a new system. I've seen the "guide" online, but it just says to move the Splunk home folder. Is this the same thing as the entire Splunk folder?

My main goal is to get the old logs showing up on the new system. The C:\Program Files\Splunk folder is about 100 gigabytes. I receive an error when trying to zip the folder or transfer it to a NAS (server.pem not allowed).

Does anyone have a step-by-step guide on what I need to do for this to work? Do I just need to transfer a particular folder? I only use the default/main index for data. I'm on version 6.x of Splunk Enterprise.

Please help! - Kevin

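On a default Windows install, the Splunk home folder ($SPLUNK_HOME) is exactly C:\Program Files\Splunk, so the guide is referring to the folder you are already copying. A minimal sketch of the usual copy step (the destination path is illustrative; stop Splunk first so no files are locked mid-copy):

REM On the old host: stop Splunk before copying
"C:\Program Files\Splunk\bin\splunk.exe" stop

REM Copy the whole folder; robocopy retries transient failures and preserves timestamps
robocopy "C:\Program Files\Splunk" "\\newhost\d$\SplunkCopy" /E /COPY:DAT /R:2 /W:5

The common pattern is then to install the same Splunk version on the new system and replace its folder with the copy before starting the service, though it is worth checking the migration documentation for your exact version.
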
Hi all, I have some values under the src field as below, but they have a problem. For example, <1b5a.4.d576d0e8-5fbb-4739-a4c7-6dfbc1a4fd2e@avo-sv.one.com> and 1b5a.4.79406b4a-9326-41b2-94cc-2626e10ea6f6@avo-sv.one.com are actually the same kind of value, and I have multiple src values with the same issue. I want to remove the "< >" wherever a string has them, so there will not be duplicates. Can anyone help me with this? Thank you.

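A minimal sketch using eval's replace function to strip a leading "<" and trailing ">" from src before any dedup or stats step:

... | eval src=replace(src, "^<|>$", "")

The anchored alternation only touches brackets at the ends of the value, so any legitimate angle brackets inside a string are left alone.
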
This can be handy for dumping a list of installed ES correlation searches with disabled status, description, frameworks, etc. Be sure your user has permissions to all the knowledge objects if you don't see ones you know are present in an app context.

| rest splunk_server=local count=0 /servicesNS/-/-/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename title as search_name, eai:acl.app as app, action.correlationsearch.annotations as frameworks
| table search_name, app, description, frameworks, disabled
| spath input=frameworks
| rename mitre_attack{} as mitre_attack, nist{} as nist, cis20{} as cis20, kill_chain_phases{} as kill_chain_phases
| table app, search_name, description, disabled, cis20, kill_chain_phases, nist, mitre_attack

Is there a way to test index-time operations without indexing logs? For example, is there a way I can provide a sample log file and see what the timestamp, host, sourcetype, source, and output after other operations like null-queuing would be? I currently use the "Add Data" section to test timestamping and line-breaking, but this doesn't show other metadata or what will be ingested after null-queuing. I also set up a quick bash command to make copies of the base log samples and have inputs continuously monitor the new files as I'm testing new sourcetypes. I feel like this is a bit inefficient. Thanks in advance for any input!

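A lighter-weight loop than continuously monitoring copies, sketched here on the assumption that a throwaway index (called test below) is acceptable, is to replay the sample on demand with the oneshot CLI:

# Ingest one sample file with a chosen sourcetype into a scratch index
$SPLUNK_HOME/bin/splunk add oneshot /path/to/sample.log -sourcetype my_new_sourcetype -index test

Events dropped by null-queuing simply never appear in the scratch index, and the surviving events show the assigned timestamp, host, source, and sourcetype.
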
Hi, sorry for this question, but I have difficulty understanding why a by clause with 3 fields retrieves fewer events than a clause with 2 fields. Is the difference in the _time field?

| stats count as PbPerf by _time toto tutu

| stats count as PbPerf by _time toto

Thanks

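One common cause (an assumption about your data, not something visible in the question) is that stats drops any event in which a by field is null, so adding tutu excludes every event that has no tutu value. A quick sketch to test that hypothesis:

| fillnull value="N/A" tutu
| stats count as PbPerf by _time toto tutu

If the total now matches the two-field version, the missing events were those without tutu.
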
Hello All, I am trying to create a chart table with the data below. I have the table sorted by month name in descending order (rolling 13 months) with the search below, but I am looking to have the months (rolling 13 months) sorted in descending order on the x-axis. I used chart in the search but with no luck; I have spent a lot of time trying different ways. I have added what I have tried, a sample input, and the desired output. Please help me here. Below is the base search, where the month field is derived after several appends and summarized by stats; it is in string format.

| table ex_p month id Value
| chart values(Value) as final_value by ex_p,id
| sort - id

-----------------------------------------------------

| table ex_p month id Value
| sort - id
| chart values(Value) as final_value by ex_p,month

SAMPLE INPUT

ex_p   month   id      Value
P1     Apr-22  202204  10 | 10%
P2     Apr-22  202204  20 | 15%
P3     Apr-22  202204  100 | 60%
P4     Apr-22  202204  27 | 100%
R P1   Apr-22  202204  12 | 45%
R P2   Apr-22  202204  36 | 89%
R P3   Apr-22  202204  16 | 30%
R P4   Apr-22  202204  28 | 65%
P1     Mar-22  202203  90 | 90%
P2     Mar-22  202203  57 | 120%
P3     Mar-22  202203  18 | 125%
P4     Mar-22  202203  76 | 76%
R P1   Mar-22  202203  80 | 70%
R P2   Mar-22  202203  78 | 99%
R P3   Mar-22  202203  97 | 85%
R P4   Mar-22  202203  08 | 09%
…      …       …       …
RP4    21-Apr  202104  10 | 110%

REQUIRED OUTPUT

ex_p   Apr-22     Mar-22     …   21-Apr
P1     10 | 10%   90 | 90%   …   …
P2     20 | 15%   57 | 120%  …   …
P3     100 | 60%  18 | 125%  …   …
P4     27 | 100%  76 | 76%   …   …
R P1   12 | 45%   80 | 70%   …   …
R P2   36 | 89%   78 | 99%   …   …
R P3   16 | 30%   97 | 85%   …   …
R P4   28 | 65%   08 | 09%   …   …

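chart orders its columns lexicographically by the split-by values, which is why sorting the rows beforehand has no effect. One known workaround, sketched under the assumption that each id maps to exactly one month (as in the sample), is to chart by the numeric id so the columns sort naturally, then reverse their order with a double transpose:

| chart values(Value) as final_value over ex_p by id
| transpose 0 header_field=ex_p column_name=id
| sort - id
| transpose 0 header_field=id column_name=ex_p

The remaining step, relabeling the 202204-style headers back to Apr-22, could be handled with rename or by building a combined "id month" label before the chart.
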
This has been asked before, and the question seems to have died, so here I am with a slightly different use case/phrasing. Dearest Splunk devs, please let me use environment variables in my configs.

Issue: I have several heavy forwarders collecting logs from different endpoints. My users need to know which heavy forwarder the logs passed through, so I want to add the heavy forwarder's hostname to the log as "collector".

Current situation:

transforms.conf

[addmeta]
REGEX = .
FORMAT = collector::$HOSTNAME
WRITE_META = true

props.conf

[generic_single_line]
TRANSFORMS-addmeta = addmeta

This results in the unfortunate log:

4/6/22 1:01:17.000 PM
testing my props.conf with a simple log
collector = $HOSTNAME
sourcetype = generic_single_line

But what SHOULD be happening:

4/6/22 1:01:17.000 PM
testing my props.conf with a simple log
collector = EventCollect01.domain.com
sourcetype = generic_single_line

What can I do to pull some sort of internal variable instead of hardcoding the host?

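Since FORMAT evidently does not expand environment variables, one workaround is to keep the value hard-coded but generate it per host. This sketch assumes each heavy forwarder receives its own small app (for example from a deployment-server template), and the hostname shown is illustrative:

# transforms.conf, stamped per heavy forwarder at deploy time
[addmeta]
REGEX = .
FORMAT = collector::EventCollect01.domain.com
WRITE_META = true

A deployment script that writes the real hostname into this one line keeps the hardcoding confined to a generated file rather than spread through hand-maintained configs.
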
I have 2 Splunk queries. The first query returns the employee IDs of the active and retired employees. The second query returns the employee IDs of the retired employees. I want to merge both queries to get only the active employees, by removing the Retired_Employee_ID values from the list of Employee_Id values.

Query 1)
index=employee_data
| rex field=_raw <regular expression used to extract Employee_ID> offset_field=_extracted_fields_bounds
| table Employee_Id

Query 2)
index=employee_data
| rex field=_raw <regular expression used to extract Retired_Employee_ID> offset_field=_extracted_fields_bounds
| table Retired_Employee_ID

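A minimal sketch that uses query 2 as a subsearch to exclude its IDs from query 1 (the rename inside the subsearch aligns the field names so NOT filters on Employee_Id; the regex placeholders are yours to fill in):

index=employee_data
| rex field=_raw <regular expression used to extract Employee_ID> offset_field=_extracted_fields_bounds
| search NOT
    [ search index=employee_data
      | rex field=_raw <regular expression used to extract Retired_Employee_ID> offset_field=_extracted_fields_bounds
      | rename Retired_Employee_ID as Employee_Id
      | fields Employee_Id ]
| table Employee_Id

Subsearch output is capped (10,000 results by default), so if the retired list can grow beyond that, a lookup-based exclusion would be safer.
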
Need my SPL to count records for the previous calendar day:

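A minimal sketch using snap-to-midnight time modifiers (the index and any filters are placeholders to adapt):

index=your_index earliest=-1d@d latest=@d
| stats count

-1d@d snaps to midnight at the start of yesterday and @d to midnight today, so together they cover exactly the previous calendar day.
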
Hi, I need some help adding new servers to a dropdown list in an app dashboard. We have some default apps on the Splunk search head. In one of the apps there is a dashboard to monitor login rates of different stadiums for a time range in UTC. Those stadiums are under one index, X, and now two more stadiums have been added under index Y. We need to add those stadiums to the dashboard dropdown to view logins. How can we include these new stadiums in that dashboard? I'm the admin here and this is a new task; as we don't have a Splunk developer on the team, can anyone help me from scratch? Thanks in advance!

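If the dropdown is populated by a search rather than static choices, widening that search to both indexes is usually all that's needed. A minimal Simple XML sketch (the token, field, and index names are assumptions; check the dashboard's source, via Edit > Source, for the real ones):

<input type="dropdown" token="stadium">
  <label>Stadium</label>
  <search>
    <query>index=X OR index=Y | stats count by stadium | fields stadium</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>stadium</fieldForLabel>
  <fieldForValue>stadium</fieldForValue>
</input>

If the dropdown instead uses hard-coded <choice> elements, add one <choice> per new stadium in the dashboard XML.
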
Hi, I need to convert the following into a single query that uses the eval command in order to perform the extractions. I currently have the following:

index="identitynow" | spath path=action | rename action as authentication_method
index="identitynow" | spath path=name | rename name as authentication_service
index="identitynow" | spath path=message | rename message as reason
index="identitynow" | spath path=status | rename status as action
index="identitynow" | spath path=source | rename source as src
index="identitynow" | spath path=source_host | rename source_host as src_user_id
index="identitynow" | spath path=apiUsername | rename apiUsername as user

Is it possible to use the spath function with the eval command? Thank you so much for all your help!

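Yes: eval has a spath(input, path) function, so all seven extractions can be collapsed into one query. A minimal sketch, assuming the JSON lives in _raw:

index="identitynow"
| eval authentication_method=spath(_raw, "action"),
       authentication_service=spath(_raw, "name"),
       reason=spath(_raw, "message"),
       action=spath(_raw, "status"),
       src=spath(_raw, "source"),
       src_user_id=spath(_raw, "source_host"),
       user=spath(_raw, "apiUsername")

Reading every value from _raw also sidesteps the ordering problem of assigning status to action after the original action field has been renamed away.
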
Hi Experts,

We have 4 physical indexers in a cluster, and for the past few days the /splunk file system has reached its storage threshold on 2 of the 4 indexers. Is there any way to equally distribute the storage load across all 4 indexers? Does the data rebalancing option help here?

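Data rebalance is the built-in mechanism for evening out bucket counts across cluster peers, so it should help with exactly this skew. A minimal sketch of running it from the cluster manager's CLI (run it during a quiet period, since it moves buckets between peers):

# On the cluster manager
splunk rebalance cluster-data -action start

# Check progress, or stop it if needed
splunk rebalance cluster-data -action status
splunk rebalance cluster-data -action stop

It is also worth confirming that forwarders are load-balancing across all four peers, since uneven forwarding will recreate the imbalance over time.
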
Hello Community, I am having issues combining results to display in a pie chart. I tried a few things, such as mvappend, and they did not work correctly. I have pulled a list of domains and want to display them in a pie chart. To get the list of domains and display them in a chart, I am using the following:

rex field=netbiosName "^(?<Domain>[^\\\\]+)"
| stats count by Domain

This works as intended, but I have a couple of results that come up as both 'domain1' and 'domain1.com' and are displayed separately in the pie chart. I would like to combine these results so that the counts for both 'domain1' and 'domain1.com' are added together under just 'domain1'. Thanks

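A minimal sketch that normalizes Domain before counting; it assumes the only variation is a trailing .com suffix (extend the pattern if other suffixes appear):

rex field=netbiosName "^(?<Domain>[^\\\\]+)"
| eval Domain=replace(lower(Domain), "\.com$", "")
| stats count by Domain

lower() also folds case differences (DOMAIN1 vs domain1) into a single pie slice.
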