All Topics

We recently set up Splunk Mobile and we want to be able to view the Monitoring Console in the Dashboards section of the app. When we add it in the GUI for the user, we do not see it in the mobile app. Is this not supported, or is there a change we need to make somewhere? We see other apps that we add, but not the Monitoring Console. I imagine it would be supported, since it shows up in the list of available apps. Any help is appreciated. Thanks.
Introduction

Splunk Phantom ingests objects from connected assets, such as your firewall, and from services like VirusTotal, MaxMind, and more. Many of these assets require that Splunk Phantom provide credentials, such as a username and password or an authentication token, to connect. Splunk Phantom stores these credentials in an encrypted form in its database, but in order to use these credentials, they must be decrypted first. The decryption keys are stored in Splunk Phantom's keystore partition.

Cautions

- If you encrypt the keystore partition, an administrator with the decryption password must provide the password each time Splunk Phantom is booted or rebooted.
- Encrypting the keystore partition only protects the keystore partition when Splunk Phantom is shut down. If an attacker gains access to the operating system or the hypervisor while Splunk Phantom is running, that attacker can access the decrypted keystore.
- Make a full backup of your Splunk Phantom deployment. See Splunk Phantom backup and restore overview.

Prerequisites

- SSH access to the operating system of your Splunk Phantom deployment on a user account with either root or sudo permissions.

Procedure

This procedure is for Splunk Phantom 4.x releases. Do this procedure during a maintenance window or other scheduled downtime. If you are encrypting the keystore partition in a clustered Splunk Phantom deployment, you must do this procedure on each Splunk Phantom node.

WARNING: If you lose or forget the encryption passphrase, you cannot mount the Splunk Phantom keystore partition.

1. SSH to your Splunk Phantom deployment.
2. As root, or a user with sudo permissions, install the disk encryption package and any dependencies.
   # yum install cryptsetup-luks
3. Make a backup of the keystore partition.
   # mkdir /root/keystore
   # cp -p --preserve=context /opt/phantom/keystore/* /root/keystore
4. Unmount the keystore partition.
   # umount /opt/phantom/keystore
5. Format the keystore partition as an encrypted volume.
   # cryptsetup luksFormat /dev/mapper/centos-opt_phantom_keystore
6. Unlock the encrypted volume.
   # cryptsetup luksOpen /dev/mapper/centos-opt_phantom_keystore keystore
7. Create the filesystem on the encrypted volume.
   # mkfs.ext4 /dev/mapper/keystore
8. Edit /etc/crypttab to add this line:
   keystore /dev/mapper/centos-opt_phantom_keystore none luks
9. Edit /etc/fstab. Modify the keystore line from:
   /dev/mapper/centos-opt_phantom_keystore
   to this:
   /dev/mapper/keystore /opt/phantom/keystore   ext4    defaults,noexec,nosuid,nodev        1 2
10. Mount the encrypted volume.
    # mount /opt/phantom/keystore
11. Move the backup of the keystore to the encrypted volume.
    # mv /root/keystore/* /opt/phantom/keystore
12. Disable the Splunk Phantom boot splash screen. Edit /etc/default/grub and remove the 'rhgb' parameter from this line:
    GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet splash vga=791"
13. Reboot your Splunk Phantom instance.

Testing

Check to make sure Splunk Phantom is decrypting credentials.

1. Log in to the Splunk Phantom web UI.
2. From the Main Menu, select Apps.
3. Choose an app that requires credentials, such as a username and password or authentication token.
4. Select a configured asset.
5. From the app's Asset Settings tab, click Test Connectivity.

Troubleshooting

If Splunk Phantom does not mount the keystore partition:

1. SSH into your Splunk Phantom instance as root or a user with sudo permissions.
2. Run this command:
   # mount / -o remount
3. If there are errors in either /etc/crypttab or /etc/fstab, correct them, then reboot Splunk Phantom.
I am using the standard 'Splunk_TA_nix' deploy-app on all of my Linux agents. Now we are starting to deploy Cortex XDR, and the local /var/log/traps/traps.pmd log is extremely verbose and unnecessary to collect in Splunk; it's already being collected by the Cortex XDR console. How do I blacklist that one log file without editing the original deploy-app? I have tried creating an additional deploy-app which specifies that folder and then blacklists that file, but it doesn't work. Maybe I have a typo (see below), but my suspicion is that there's a precedence issue, i.e., I can't modify the input stanza from the first deploy-app?

My new deploy-app's inputs.conf:

[monitor:///var/log/traps/]
blacklist = traps.pmd
recursive = false
disabled = false
index = os
sourcetype = syslog
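For comparison, a hedged sketch of how such a stanza is often written: the blacklist attribute is a regular expression matched against the full file path, so the dot is usually escaped and the pattern anchored. This is only a sketch against the paths described above, not a confirmed fix for the precedence question:

```ini
[monitor:///var/log/traps/]
# blacklist is a regex matched against the full path, e.g. /var/log/traps/traps.pmd
blacklist = traps\.pmd$
disabled = false
index = os
sourcetype = syslog
```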
The app description states: "you can consume the data using the prebuilt dashboard panels included with the add-on." And yet, when I go to the pre-built panels page, it is empty. Where are they?
Right now I have something like this:   index=my_index sourcetype=my_sourcetype | rex field=message "- (?<User>\S+) -:" | rex field=message "- (?<MessageInfo>\S+) :" | eval Err=if(match(MessageInfo, "(Error example 1)|(Error example 2)"), 1, 0) | eval Succ=if(match(MessageInfo, "(Success example 1)|(Success example 2)"), 1, 0) | stats sum(Err) as ErrCount, sum(Succ) as SuccCount by User | table User, ErrCount, SuccCount   So, ErrCount gets the total count of errors for each User. However, I am writing an alert, and we only want to be alerted if there have been 10 or more errors since the last success - over a 4 hour time range. So basically: 1. By User, look at the last success message that occurred in the 4 hour time range 2. If 10 or more errors occurred since the last success message, set a flag for the User - only Users with a flag set are tabled 3. Table User and the amount of errors that occurred since the final success Is this at all possible? How could I start to go about it? I am lost on how to get the last success message and then use that to get the quantity of errors since.
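One possible direction, sketched under the assumption that the extractions above work and that the alert runs over a 4-hour window: use eventstats to find each User's last success time, then count only the errors that came after it (or all errors if there was no success):

```
index=my_index sourcetype=my_sourcetype earliest=-4h
| rex field=message "- (?<User>\S+) -:"
| rex field=message "- (?<MessageInfo>\S+) :"
| eval Err=if(match(MessageInfo, "(Error example 1)|(Error example 2)"), 1, 0)
| eval Succ=if(match(MessageInfo, "(Success example 1)|(Success example 2)"), 1, 0)
| eventstats max(eval(if(Succ==1, _time, null()))) as last_succ by User
| where Err==1 AND (isnull(last_succ) OR _time > last_succ)
| stats count as ErrCount by User
| where ErrCount >= 10
```

The final `where` acts as the "flag": only Users with 10 or more errors since their last success survive into the results, which an alert can then trigger on.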
I have a dashboard which provides a handful of filter criteria, for example, `fieldA=A` and `fieldB=B`. One such criterion changes the application I am searching on, which does not have `fieldA`. Is there a way to conditionally set my filters, such that they apply to my search query only if `fieldA` exists in an application's logs?
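In Simple XML, one common pattern is to build the filter into a token that is set to an empty string when it shouldn't apply. A sketch with hypothetical input, app, and token names:

```xml
<input type="dropdown" token="app" searchWhenChanged="true">
  <label>Application</label>
  <choice value="app_a">App with fieldA</choice>
  <choice value="app_b">App without fieldA</choice>
  <change>
    <!-- For the app that lacks fieldA, blank out the filter token -->
    <condition value="app_b">
      <set token="fieldA_filter"></set>
    </condition>
    <!-- Otherwise, build the filter clause into the token -->
    <condition>
      <set token="fieldA_filter">fieldA="A"</set>
    </condition>
  </change>
</input>
```

The panel search then interpolates the token, e.g. `index=my_index app=$app$ $fieldA_filter$ fieldB="B"`; when the token is empty, the clause simply disappears from the query.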
Hi, I'm trying to build an app that will pull information from a third-party tool via its API. The information I'm getting is not event data and is only going to be pulled when called by a user of the app. The API link is going to be authenticated with a service account that Splunk will store the password for. Here's where I'm running into trouble: when the users call the API, I need to pull the password to initiate the session, but it's obviously going to be encrypted, and the users can't get it without the list_storage_passwords role. However, if I give the users the list_storage_passwords role, they can see ALL stored passwords by using the REST command. Is there a way to lock down the list_storage_passwords role so it only brings back the passwords for a specific app? If not, how else could I store a password that only people who have access to the app could decrypt?
Hi, I recently took over our instance from a colleague who left, and I am stuck on these errors whenever I reboot the Splunk server where the TA is installed:

Invalid key in stanza [MS_AAD_signins://AzureADSignins] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 4: max_records (value: XX).
Invalid key in stanza [MS_AAD_signins://AzureADSignins] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 6: tenant_domain (value: XXXXXX).
Invalid key in stanza [MS_AAD_signins://AzureADSignins] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 7: client_id (value: XXXXXXX).
Invalid key in stanza [MS_AAD_signins://AzureADSignins] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 8: client_secret (value: XXXXXXX).
Invalid key in stanza [MS_AAD_audit://AzureADAudit] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 18: max_records (value: XX).
Invalid key in stanza [MS_AAD_audit://AzureADAudit] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 20: tenant_domain (value: XXXXXXX).
Invalid key in stanza [MS_AAD_audit://AzureADAudit] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 21: client_id (value: XXXXXXXX).
Invalid key in stanza [MS_AAD_audit://AzureADAudit] in /data/splunk/etc/apps/TA-MS-AAD/local/inputs.conf, line 22: client_secret (value: XXXX).
Invalid key in stanza [additional_parameters] in /data/splunk/etc/apps/TA-MS-AAD/local/ta_ms_aad_settings.conf, line 2: client_id (value: XXX).
Invalid key in stanza [additional_parameters] in /data/splunk/etc/apps/TA-MS-AAD/local/ta_ms_aad_settings.conf, line 3: client_secret (value: XXX).

Looking at the internal error log for this TA, I get:

<stderr> Introspecting scheme=azure_virtual_network: File "/data/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad_declare.py", line 10, in <module>

Looking at the file, line 10 is the "import re" line. Not too sure where to go from here, or whether the two are linked. Any help would be appreciated. Thanks!
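For context: splunkd validates .conf keys against an app's spec files, and "Invalid key in stanza" typically means the key is not declared in the add-on's README/inputs.conf.spec (for example, because the spec is missing or the app did not load cleanly). The two symptoms may or may not be related. A sketch of the kind of spec entries those errors imply, with placeholder values:

```
[MS_AAD_signins://<name>]
max_records = <value>
tenant_domain = <value>
client_id = <value>
client_secret = <value>
```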
I'm a Windows guy working with Linux, trying to get macOS events into Splunk. We don't have many Macs where I work, but we do have some. Does anyone have reference material on inputs.conf for macOS and how I get the events into Splunk? The Splunk UF is installed, but I need to know more about what to monitor on macOS.
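As a starting point, a hedged inputs.conf sketch for a macOS universal forwarder — the file paths assume a typical macOS layout, and the index/sourcetype values are placeholders to adjust for your environment:

```ini
# Common macOS system logs (verify these paths exist on your hosts)
[monitor:///var/log/system.log]
index = os
sourcetype = syslog
disabled = false

[monitor:///var/log/install.log]
index = os
sourcetype = syslog
disabled = false
```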
Hi all, is it possible to set a global token, preset so that all dashboards read that value? For example, $env:user$. Thank you, Simone
Can someone explain the licensing model for InfraMon? Does 1 host = 1VM? If I'm using this tool for basic telemetry monitoring such as CPU / RAM / Disk, that could be very expensive at that ratio. Any guidance would be appreciated.   Thanks.
So here is my code:

import splunklib.client as client
import splunklib.results as results

# connect and keep the Service handle (the original paste dropped this assignment)
service = client.connect(**connection_args)
job_kwargs = {"search_mode": "realtime", "earliest_time": "rt", "latest_time": "rt"}
# wrap the export stream in a ResultsReader to get parsed results and Messages
for item in results.ResultsReader(service.jobs.export(query=my_query, **job_kwargs)):
    if isinstance(item, results.Message):
        print(item.message)
    else:
        print(item)

When I run this code with a general query

query = "search index=main"

it works properly. But if I try with

query = "search `notable` | eval rule_name=if(isnull(rule_name),source,rule_name) | eval rule_title=if(isnull(rule_title),rule_name,rule_title) | `get_urgency` | `risk_correlation` | eval rule_description=if(isnull(rule_description),source,rule_description) | eval security_domain=if(isnull(security_domain),source,security_domain)"

I get a lot of events that I cannot see in the regular search. Also, I get almost every event multiple times with a little change (such as dest_ip=8.8.8.8 and dest_ip=8.8.8.9), and some of them are even identical. Note: when testing, I find that I have 9 events in 5 minutes on average, but when I use the real-time search I get almost 130 on average.
After installing DB Connect 3.5.0, I tried to migrate the configuration settings from 3.1.4 to 3.5.0 on Splunk Enterprise 7.3.2. It failed with an SSL error: certificate verify failed (_ssl.c:618). Any help on this would be appreciated.
Is there a way to show multiple dashboards on your Splunk Homepage?
I am working on configuring the TAXII feeds. My POST argument is as below:

collection="curated-ragw" earliest="-7d" key=$user:redacted,realm:redacted$

However, it is not working. The below error shows up:

TaxiiHandlerException: Exception when polling TAXII feed: Message Type: Status_Message; Message ID: 8492898524872483651; In Response To: 0; Status Type: BAD_MESSAGE; Message: The access to CTIX has been blocked because incorrect credentials were provided. Please contact Support team.

I have checked that the username and password added in credential management are correct. Can someone help me with how to configure this? I can only access the UI to configure these feeds.
I have created a collection in app/local/collections.conf and a matching lookup in app/local/transforms.conf. I have 5 key fields which together form the unique key; the combination of these is also stored in the _key field. The data is populated from an index which is filled from a dbconnect source, and automatically updated into the collection. All this works just fine: when I use the lookup in SPL using the five fields as input, I nicely get referenced data back, and when I create this lookup as part of a data model, it also provides the extra fields in the datamodel. However, if I try to use this as an automatic lookup, I cannot get it to work. I have verified the correct use of the sourcetype (and also tried defining against source), I have verified the rights and tried setting them to all on app and global level, and I have duplicated the full config on a csv file, which works just fine; but against the kvstore the automatic lookup just won't work. Illustration of the files and configs:

collections.conf in app/local:

[my_collection]
field.inputfield1 = string
field.inputfield2 = string
field.inputfield3 = string
field.inputfield4 = string
field.inputfield5 = string
field.outputfield1 = string
...

transforms.conf in app/local:

[my_collection_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, inputfield1, inputfield2, inputfield3, inputfield4, inputfield5, outputfield1
...

props.conf in app/local:

[sourcetype_stanza]
LOOKUP-enrich_kv = my_collection_lookup inputfield1 AS datafield1 inputfield2 AS datafield2 inputfield3 AS datafield3 inputfield4 AS datafield4 inputfield5 AS datafield5 OUTPUTNEW _key AS key outputfield1
...

Any experiences/thoughts/ideas?
Hello Fellow Splunkers, I have been looking for a solution to ingest Dell EMC Unity 500 storage logs and my research has basically told me it's not possible...I just find that hard to believe with the power of Splunk so wanted to ask you all if you've seen this ingested before and if so, how? I saw a similar post with this same question here: https://community.splunk.com/t5/Deployment-Architecture/How-to-add-the-logs-of-EMC-unity-storage-to-splunk/m-p/394811 However, it's from 2018 and doesn't appear to have a solid solution linked to it. Please reach out if you have any suggestions, solutions, or questions!
Hello, Two months ago we had the trial for the Enterprise version, but now we are using the free version. Since the free version was selected, we're prompted with an error, and we can't solve it. The error when we try to do a new search is the following: "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK." Any ideas? It'd be nice not to have to reinstall the whole platform, as the stored data is needed. Thanks in advance
Hello, We want to call a REST API endpoint as the action for an alert and also wish to send some parts of the search result with this API call. Do you have any suggestions?
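One built-in option worth checking is Splunk's webhook alert action, which POSTs a JSON payload (including fields from the first result row) to a URL. A hypothetical savedsearches.conf fragment — the stanza name and endpoint URL are placeholders, and this is only a sketch of the approach:

```ini
[My REST Alert]
# enable the webhook alert action; the payload includes the triggering result
action.webhook = 1
action.webhook.param.url = https://example.com/alert-endpoint
```

If the endpoint needs a custom payload shape or authentication headers, a custom alert action (or an add-on such as a dedicated webhook app) would be the usual next step.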
Hi all, I'm new to this forum. Would be really happy if you could help me with this. I am ingesting Blue Coat ProxySG logs via syslog, as recommended, with the log format configuration provided by Splunk:

$(date)T$(x-bluecoat-hour-utc):$(x-bluecoat-minute-utc):$(x-bluecoat-second-utc).000z $(s-computername) bluecoat - splunk_format

https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Setup

The event time of a ProxySG event is always shown as UTC+2, which is causing Splunk to not recognize the time. Can I keep the format configuration and set the ProxySG to local time to avoid the UTC offset? Will this configuration still work and simply not show the +2? Regards, O.
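If the device keeps emitting local time while labeling it as UTC, one hedged workaround is to declare the offset in props.conf on the parsing tier. The sourcetype name below is an assumption — check what the add-on actually assigns in your setup — and note the POSIX sign inversion in the tz database: Etc/GMT-2 means UTC+2.

```ini
# Hypothetical sourcetype; tells Splunk the timestamps are really UTC+2
[bluecoat:proxysg:access:syslog]
TZ = Etc/GMT-2
```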