All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We are not clear whether setting TRUNCATE to a certain value guarantees that the event won't exceed this size in bytes. If not, can we specify the maximum length of an event somewhere?
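For reference, TRUNCATE is set per sourcetype in props.conf and is measured in bytes of raw text per line; lines longer than the limit are cut off at that size, so it does bound the event length. A minimal sketch, with a placeholder sourcetype name:

[my_sourcetype]
# Truncate any line longer than 20000 bytes; 0 disables truncation entirely
TRUNCATE = 20000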
How can I send alert result(s) to a dashboard input and then email the dashboard results? Please let me know if anybody has worked on this before. Thank you!
Hi, I am trying to use my Controller but I found that it is in PAUSE status.

System Status: xxxxxxxxxx2020021306300412.saas.appdynamics.com
Current Status: PAUSE since November 21 @ 12:00 AM
Uptime: 100%

Do you know how it can be activated? Thanks. Best regards, Hugo Torres
What is the root cause of the message that prevents saving a search: "Error in 'SearchParser': The search specifies a macro.."? This error started appearing after a migration from an old SHC to a new SHC. The resolution was to move the macro into the same app as the search, even though the macro was set to Global sharing, but that doesn't explain the root cause. The error returns when the macro is moved back to its original app.
Hello, I have some syslog data collected and forwarded to a custom path: /var/log/remote/2020/<month>/messages/<filename>. For most logs this data gets the correct sourcetype = syslog.

inputs.conf:

[monitor:///var/log/remote/.../messages]
whitelist = (archive|\_messages\.log|_messages\.log\-)
blacklist = (\.bz2$)
index = nix_os
sourcetype = syslog
disabled = 0
recursive = true
crcSalt = SOURCE1

props.conf:

[source::.../var/log/remote/.../messages*]
sourcetype = syslog

Unfortunately I have seen an issue where, if the file is below a certain size, the filename gets set as the sourcetype.

filename: hostname.env.ext.company.com_messages.log
path to filename: /var/log/remote/2020/02/env/messages/hostname.env.ext.company.com_messages.log
sourcetype set as: hostname.env.ext.company.com_messages

Why would the sourcetype get created as the filename? Thanks for the help!
Hello, did anyone integrate CyberX with Splunk? If so, what did you have to configure, or what info did you provide to CyberX to get it to work? I checked the CyberX app for more details but I could not find anything related to how they send the data to Splunk (via syslog, UF, API...?). Thanks
I am using the query below and was able to exclude results with messages like "Optional.of(The following items are not available for order at this time)", but I found one message still appearing: "Optional.of(Items quantity is over the maximum quantity)". Not sure if this has anything to do with the regex.

REJECTED sourcetype="pos-generic:prod" partner_account_name="Level Up"
| regex message != "item"
| table merchantId, orderId, message
| stats count by merchantId, message
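A likely explanation, for what it's worth: | regex message != "item" only drops events whose message matches the case-sensitive pattern "item", so "Items" with a capital I slips through. A sketch of a case-insensitive variant:

REJECTED sourcetype="pos-generic:prod" partner_account_name="Level Up"
| regex message != "(?i)item"
| table merchantId, orderId, message
| stats count by merchantId, message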
I have a query that loads a string into a token depending on the results. If one of the three results comes out, I want to make part of the token bold, e.g.:

- This user has level 1 permission
- This user has level 2 permission
- This user has NO PERMISSIONS

I have the query outputting a value like this: This user has <strong>NO PERMISSIONS</strong>, but it's obviously escaping the angle brackets into their HTML entity codes and rendering the markup as literal text rather than as an HTML tag.
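One common workaround (a sketch, assuming a Simple XML dashboard; the token name is a placeholder): render the token inside an <html> panel element, which interprets markup instead of escaping it, and keep the styling in the dashboard rather than in the search result:

<panel>
  <html>
    <p>This user has <strong>$permission$</strong></p>
  </html>
</panel>

Here the search would set $permission$ to the plain text (e.g. "NO PERMISSIONS"), so no HTML has to pass through the token at all.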
Hi, can anybody help me to get some use cases for Darktrace? Right now I am looking only at the score value.
Hi all, I am new to Splunk. I have a few Windows services in our environment. Sometimes those services hang or stop automatically. I want to use Splunk to get notifications/alerts whenever a service goes down or hangs. If somebody can share any steps, that would be really appreciated. Thanks in advance!
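A minimal sketch of one common approach, assuming the Splunk Add-on for Microsoft Windows is installed on the forwarder (the service name, index, source, and field names are placeholders based on the add-on's WinHostMon output):

inputs.conf on the forwarder, polling service state every 5 minutes:

[WinHostMon://service]
type = Service
interval = 300
index = windows

A search over the collected state, scheduled as an alert that triggers when the result count is greater than zero:

index=windows source=Service DisplayName="My Critical Service" State!="Running"
| stats latest(State) as State by host, DisplayName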
I am able to connect to an Azure event hub using the shared key and event hub name in inputs, but I am not seeing any logs from the event hub in Splunk. Every 30 seconds (the input interval) I get the logs below when using the search index=_internal sourcetype=ta:ms:aad:log debug. It seems like there is no data in the event hub. The key I am using has the listen permission. When looking at the hub in Azure, it seems as if logs are being sent to the hub.

2020-02-19 09:41:57,341 DEBUG pid=52756 tid=ThreadPoolExecutor-0_3 file=base_modinput.py:log_debug:286 | Splunk saving check point. Hub name: hubname, partition_id: 4, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_4, last offset: -1
2020-02-19 09:41:52,417 DEBUG pid=52756 tid=ThreadPoolExecutor-0_3 file=base_modinput.py:log_debug:286 | Splunk getting Event Hub events. Hub name: hubname, partition_id: 4, event data type: None, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_4, last offset: -1
2020-02-19 09:41:52,412 DEBUG pid=52756 tid=ThreadPoolExecutor-0_2 file=base_modinput.py:log_debug:286 | Splunk saving check point. Hub name: hubname, partition_id: 2, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_2, last offset: -1
2020-02-19 09:41:52,407 DEBUG pid=52756 tid=ThreadPoolExecutor-0_1 file=base_modinput.py:log_debug:286 | Splunk saving check point. Hub name: hubname, partition_id: 1, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_1, last offset: -1
2020-02-19 09:41:52,402 DEBUG pid=52756 tid=ThreadPoolExecutor-0_0 file=base_modinput.py:log_debug:286 | Splunk saving check point. Hub name: hubname, partition_id: 0, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_0, last offset: -1
2020-02-19 09:41:52,396 DEBUG pid=52756 tid=ThreadPoolExecutor-0_3 file=base_modinput.py:log_debug:286 | Splunk saving check point. Hub name: hubname, partition_id: 3, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_3, last offset: -1
2020-02-19 09:41:47,206 DEBUG pid=52756 tid=ThreadPoolExecutor-0_3 file=base_modinput.py:log_debug:286 | Splunk getting Event Hub events. Hub name: hubname, partition_id: 3, event data type: None, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_3, last offset: -1
2020-02-19 09:41:47,197 DEBUG pid=52756 tid=ThreadPoolExecutor-0_2 file=base_modinput.py:log_debug:286 | Splunk getting Event Hub events. Hub name: hubname, partition_id: 2, event data type: None, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_2, last offset: -1
2020-02-19 09:41:47,087 DEBUG pid=52756 tid=ThreadPoolExecutor-0_1 file=base_modinput.py:log_debug:286 | Splunk getting Event Hub events. Hub name: hubname, partition_id: 1, event data type: None, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_1, last offset: -1
2020-02-19 09:41:46,935 DEBUG pid=52756 tid=ThreadPoolExecutor-0_0 file=base_modinput.py:log_debug:286 | Splunk getting Event Hub events. Hub name: hubname, partition_id: 0, event data type: None, checkpoint key: event_hub_sequence_number_Azure_Splunk_Audit_login_hubname_0, last offset: -1
2020-02-19 09:41:46,913 DEBUG pid=52756 tid=MainThread file=base_modinput.py:log_debug:286 | Splunk partition IDs for hub hubname: [u'0', u'1', u'2', u'3', u'4']
2020-02-19 09:41:45,801 DEBUG pid=52756 tid=MainThread file=base_modinput.py:log_debug:286 | Splunk Getting proxy server.
Hello, we created a custom alert action per the documentation and tried to trigger it. We get the following errors:

02-19-2020 16:01:42.547 +0100 ERROR SearchScheduler - Error in 'sendalert' command: Alert action script for action "splunk2alc" not found., search='sendalert splunk2alc results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__d038423__mlbso__RMD5782cf4a2b848fa26_at_1582124460_1760/results.csv.gz" results_link="https://splunk-ml.zone1.mo.sap.corp:443/app/mlbso/@go?sid=scheduler__d038423__mlbso__RMD5782cf4a2b848fa26_at_1582124460_1760"'
02-19-2020 16:01:42.546 +0100 ERROR sendmodalert - Error in 'sendalert' command: Alert action script for action "splunk2alc" not found.
02-19-2020 16:01:42.546 +0100 ERROR sendmodalert - action=splunk2alc - Failed to find alert.execute.cmd "python".
02-19-2020 16:01:42.544 +0100 INFO sendmodalert - Invoking modular alert action=splunk2alc for search="Crash Dump Alert ALC - AlertAction" sid="scheduler__d038423__mlbso__RMD5782cf4a2b848fa26_at_1582124460_1760" in app="mlbso" owner="d038423" type="saved"
02-19-2020 16:01:38.316 +0100 DEBUG sendmodalert - action=alert_manager - Token value action.splunk2alc=1

(All events come from host = mo-7ee963859.zone1.mo.sap.corp, source = /opt/splunk/var/log/splunk/splunkd.log, sourcetype = splunkd.)

Our alert_actions.conf looks as follows:

[splunk2alc]
is_custom = 1
disabled = 0
label = Splunk2ALC
description = Send Alert to Alc
track_alert = 1
ttl = 600
maxtime = 5m
icon_path = alert_manager_icon.png
payload_format = xml
alert.execute.cmd = python
alert.execute.cmd.arg.0 = /opt/splunk/etc/apps/mlbso/bin/splunk2alc.py

For alert.execute.cmd we have already tried quite a few combinations, like:

$SPLUNK_HOME$/bin/python
$SPLUNK_HOME/bin/python
/opt/splunk/bin/python

All throw the same error. Any ideas? Kind regards, Kamil
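For comparison, a sketch of the layout the custom alert action documentation describes, which drops alert.execute.cmd entirely (a commonly suggested fix, not verified against this setup): name the script after the stanza and keep it in the app's bin directory.

[splunk2alc]
is_custom = 1
disabled = 0
label = Splunk2ALC
description = Send Alert to Alc
payload_format = xml
# No alert.execute.cmd: with the script saved as
# $SPLUNK_HOME/etc/apps/mlbso/bin/splunk2alc.py (matching the stanza name),
# splunkd locates it and runs it with Splunk's bundled Python automatically.

When alert.execute.cmd is used, it expects the filename of a script inside the app's bin directory rather than a bare interpreter name or an absolute path, which would explain the Failed to find alert.execute.cmd "python" error.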
I have been handed events with what appears to be memory info:

memTotalMB memFreeMB memUsedMB memFreePct memUsedPct pgPageOut swapUsedPct pgSwapOut cSwitches interrupts forks processes threads loadAvg1mi waitThreads interrupts_PS pgPageIn_PS pgPageOut_PS
92101 66926 7175 77.6 21.4 3497702952 3.6 909526 998772788 4232481396 16909785 302 1012 4.07 0.00 7876.48 341.04 41.79

I am supposed to display it in a tabular format, with memTotalMB, memFreeMB, etc. as the headers and 92101, 66926, etc. as their values. Could anyone help me with the query please?
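A minimal sketch, assuming each event keeps the header row and value row together as shown above (index and sourcetype are placeholders): the multikv command is built for this header-plus-rows layout and extracts each column into a field.

index=main sourcetype=memory_stats
| multikv
| table memTotalMB memFreeMB memUsedMB memFreePct memUsedPct swapUsedPct

If the fields are already extracted at search time, the | table part alone is enough.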
Hi, we are about to move from a single install to a clustered install on 3 machines (1 search head and 3 indexers), and we are getting SSDs. As SSDs are expensive, do we need to add RAID as well? Thanks in advance, Rob
Hello, I want to create a pattern for similar error messages without discarding any of the events. Let's say I have events like:

error occurred from ui correlationId; abcd1234
error occurred from ui correlationId; abcd2345
error occurred from ui correlationId; abcd4536
error occurred from ui correlationId; abcd6475

It has 100 errors like that; when I try to count, it shows 100 different errors, but in this case it is really a single error. Here is what I want to do:

1. Capture the message as "error occurred from ui correlationId; xxxx-yyyy", i.e. normalize the id while keeping the rest of the message
2. Count the total of similar events as a single error
3. Any better solution to capture the different errors, so we can take action immediately, would be very useful in our production
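A minimal sketch of one way to cover points 1 and 2: normalize the variable id before counting (the index and the regex are assumptions about the data and id format).

index=main "error occurred from ui"
| eval pattern = replace(_raw, "correlationId;\s+\S+", "correlationId; <id>")
| stats count by pattern

For point 3, the built-in cluster command groups similar events without a hand-written regex:

index=main error
| cluster showcount=true
| table cluster_count, _raw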
Does anyone know how to connect Splunk to Power BI? We can use Splunk ODBC, but that is not working with the latest version. Can anybody help?
Hi, I need to look up some values from a lookup by id, and I have multiple values per id, with more coming in from time to time. Entries are typically appended to the bottom of a lookup. The default behavior, which uses the first match from the top, is not what I need, because I'm only interested in the latest entry per id. It would be nice to be able to perform the lookup from bottom to top, or to prepend new entries. Here's what I came up with so far:

a) I could re-sort the lookup after inserting a new entry, effectively prepending the new entry.
b) I could re-write the entire lookup on adding an entry, something like | eval new lookup entry | inputlookup append=t lookup | outputlookup lookup, which would also prepend it.
c) I could use max_matches=100000 on my lookup definition and return a multivalue field, then get the last entry with | eval field = mvindex(field, -1), effectively doing the lookup from bottom to top.
d) I could furthermore work with a time-based lookup, using some date in the past, to reduce the number of matches.

Each of these has at least one downside though. a) is ugly: I need to handle the additional step, and depending on how I create the entry this is more or less of a hassle. b) raises performance concerns: I'm moving around a potentially large amount of lookup content. c) is even uglier, because of multivalue fields with their limits and quirks, and d) means I need to know a sensible value for that date. Any other ideas?
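For reference, a sketch of option c as described (the lookup, id, and value names are placeholders):

transforms.conf:

[mylookup]
filename = mylookup.csv
max_matches = 100000

Search:

... | lookup mylookup id OUTPUT value
| eval value = mvindex(value, -1)

With max_matches raised, the lookup returns all matching rows as a multivalue field in file order, so mvindex(value, -1) keeps only the last-appended entry.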
I want to test throughput on a Splunk setup using a Dev 10GB license, but the traffic will be nearer 6-7TB per day. I know this will go over the license limit, but I only need to test for around 10 days. I have 4 dev licenses. Will Splunk continue to work if we just use the 10GB licenses, adding a new one every 5 days? This is a proof of concept for the hardware. We need Splunk to keep working with all its functionality during the 10 days. Any information on whether this will work is very welcome. Thanks in advance
The following is a section of a larger JSON data source ingested into our Splunk instance:

"identities": [{"issuerAssignedId": "bob.smith@gmail.com", "issuer": "domain.onmicrosoft.com", "signInType": "emailAddress"}, {"issuerAssignedId": "0023587453958742158@domain.onmicrosoft.com", "issuer": "domain.onmicrosoft.com", "signInType": "userPrincipalName"}]

The problem is that this data ends up in three multi-valued fields:

identities{}.issuer
identities{}.issuerAssignedId
identities{}.signInType

I need to extract the identities{}.issuerAssignedId values into their own fields as separate identities. Is this something that can be done at search time, or do I need to add a transform somewhere? If a transform is needed, what could that look like? Thank you!
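This can usually be done at search time; a sketch using mvzip/mvexpand to pair up the parallel multivalue fields and then split each identity back out (the "|" delimiter is an arbitrary choice):

... | eval identity = mvzip('identities{}.signInType', 'identities{}.issuerAssignedId', "|")
| mvexpand identity
| eval signInType = mvindex(split(identity, "|"), 0)
| eval issuerAssignedId = mvindex(split(identity, "|"), 1)
| table signInType, issuerAssignedId

The single quotes around the field names are required in eval because of the {} and . characters.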
Is it possible to use multiple wildcards in the host:: stanza in the props.conf file?

[host::svr-*-blah-*]
TRANSFORMS-remove = remove_stuff

We are trying to remove stuff from multiple hosts in different geographical locations that have very similar names:

svr-us-blah-01
svr-us-blah-02
svr-us-blah-03
svr-eur-blah-01
svr-eur-blah-02
svr-eur-blah-03
svr-pac-blah-01
svr-pac-blah-02
svr-pac-blah-03

Each host collects very similar logs and then forwards them to Splunk, but we want to dump the noise, so I was hoping I could just use the [host::svr-*-blah-*] stanza to apply the same props/transforms to each host for dumping the noise. Will that work? Thanks, ed
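For completeness, a sketch of the matching transform that routes the noise to the nullQueue (the REGEX is a placeholder for the actual noisy pattern):

props.conf:

[host::svr-*-blah-*]
TRANSFORMS-remove = remove_stuff

transforms.conf:

[remove_stuff]
REGEX = some noisy pattern
DEST_KEY = queue
FORMAT = nullQueue

host:: stanzas do accept * wildcards, with each * matching any string, so a multi-wildcard pattern like this is a legitimate match expression.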