All Topics


Heya Splunk Community folks,

In an attempt to make a fairly large table in DS readable, I was messing around with fontSize, and I noted that the JSON parser in the code editor was telling me that the pattern "^>.*" is valid for the property options.fontSize. Does anyone know whether that is actually enabled in DS? In other words, can I put a selector/formatting function in (for example, formatByType) and have the fontSize selected based on whether the column is a number or text type? If so, what's the syntax for the context definition?

For example, is there a way to make this work?

"fontSize": ">table | frameBySeriesTypes(\"number\",\"string\") | formatByType(fontPickerConfig)"

(If not, there should be!) Thanks!
A data model is created with a root search dataset and is set to acceleration as well.

rootsearchquery1: index=abc sourcetype=xyz field_1="1"
rootsearchquery2: index=abc sourcetype=xyz field_1="1" | fields _time field_2 field_3

For both queries, the auto-extracted fields are added (_time, field_2, field_3). These are general questions for better understanding. I would like suggestions on which usage (tstats, datamodel, root event, root search with a streaming command, root search without a streaming command) is preferable in which scenario.

1. | datamodel datamodelname datasetname | stats count by field_3
For Query 1, the output is quite fast, just under 10 seconds (root search without a streaming command). For Query 2, the output takes more than 100 seconds (root search with a streaming command).

2. For Query 2, the tstats command also takes more than 100 seconds and only returns results when summariesonly=false is added. Why does it not return results when summariesonly=true is set? For Query 1, it works with both summariesonly=false and summariesonly=true, and the output is quite fast, less than 2 seconds actually. So in what scenario is it recommended to add streaming commands to a root search and accelerate it, when in return it queries by adding the fields twice, which becomes even more inefficient?
For example, this is for Query 2:
| datamodel datamodelname datasetname | stats count by properties.ActionType
The underlying query that runs is:
(index=* OR index=_*) index=abc sourcetype="xyz" field_1="1" _time=* DIRECTIVES(READ_SUMMARY(datamodel="datamodelname.datasetname" summariesonly="false" allow_old_summaries="false")) | fields "_time" field_2 field_3 | search _time = * | fields "_time" field_2 field_3 | stats count by properties.ActionType

3. In general, is this what is recommended?
- When a data model is accelerated, using either | datamodel or | tstats gives better performance.
- When a data model is not accelerated, only using | tstats gives better performance.
Is this correct?

4. When a data model is not accelerated, the | datamodel command pulls the data from the _raw buckets, so what is the use of querying the data through the data model instead of the index directly, when the performance is the same?

5. While querying | datamodel datamodelname datasetname, why does Splunk add (index=* OR index=_*) by default? Can it be changed?
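For reference, a minimal tstats sketch against the accelerated dataset described above (datamodelname, datasetname, and field_3 are the placeholder names from this post; root-search dataset fields are referenced with the dataset-name prefix):

| tstats summariesonly=true count
    from datamodel=datamodelname
    where nodename=datasetname
    by datasetname.field_3

With summariesonly=true, tstats reads only the pre-built acceleration summaries, so it returns nothing for time ranges the summary has not yet covered; summariesonly=false falls back to searching the raw events for those ranges.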
I am trying to track file transfers from one location to another.

Flow: Files are copied to File copy location -> Target location

Both the File copy location and the Target location logs are in the same index, but each has its own sourcetype. The File copy location has a log event for each file, but the Target location has a single log event that contains multiple file names.

Log format of the File copy location:
2024-12-18 17:02:50, file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of the Target location:
2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:
File Name    FileCopyLocation       Target Location
XYZ.csv      2024-12-18 17:02:50    2024-12-18 17:30:10
ABC.zip      2024-12-18 17:02:58    2024-12-18 17:30:10
123.docx     2024-12-18 17:03:38    2024-12-18 17:30:10
143.docx     2024-12-18 18:06:19    Pending

Since the events are in the same index and there are many of them, I do not want to use join.
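A minimal stats-based sketch of one way to correlate the two sourcetypes without join (the index and sourcetype names are placeholders, and it assumes file_name is auto-extracted from the file-copy events while the target events need a rex for the XML):

index=your_index (sourcetype=filecopy_st OR sourcetype=target_st)
| rex max_match=0 "<FileName>(?<xml_file_name>[^<]+)</FileName>"
| eval file_name=coalesce(file_name, xml_file_name)
| mvexpand file_name
| eval copy_time=if(sourcetype="filecopy_st", _time, null())
| eval target_time=if(sourcetype="target_st", _time, null())
| stats min(copy_time) AS copy_epoch min(target_time) AS target_epoch by file_name
| eval FileCopyLocation=strftime(copy_epoch, "%Y-%m-%d %H:%M:%S")
| eval "Target Location"=coalesce(strftime(target_epoch, "%Y-%m-%d %H:%M:%S"), "Pending")
| table file_name FileCopyLocation "Target Location"

mvexpand splits the multi-file target events into one row per file name, so a plain stats by file_name can then pick up the earliest timestamp from each side.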
Hello everyone! I most likely could solve this problem if given enough time, but I never seem to have enough.

Within Enterprise Security we pull asset information via LDAPsearch into our ES instance hosted in Splunk Cloud. The cn=* field contains a mix of both IP addresses and hostnames. We aim for host fields to be either hostname or nt_host. Some of these values, though, are written like this: cn=192_168_1_1

I want to evaluate the existing field and output those values in normal dotted-decimal notation when they appear. I am assuming I would need an if statement that keeps hostname values intact while the else branch performs the conversion. I am not at a computer right now but will update with some data and my progress so far.

Thanks!
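As a starting point, a minimal eval sketch that converts underscore-separated IPs back to dotted notation while leaving hostname values untouched (cn is the field name from the post; the regex is an assumption about the underscore format):

| eval cn=if(match(cn, "^\d{1,3}(_\d{1,3}){3}$"), replace(cn, "_", "."), cn)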
I currently have two different tables: the first shows the number of firewalls each location has (WorkDay_Location) from an inventory lookup file, and the second shows how many firewalls are logging to Splunk, by searching the firewall indexes to validate that they are logging. I would like to combine them and add a third column that shows the difference.

I run into problems with multisearch since the first search uses a lookup (via inputlookup), and the second search also uses a lookup: I search for firewalls by hostname, and if the hostname follows a certain naming convention, it is matched against a lookup file that maps the hostname to WorkDay_Location.

FIREWALLS FROM INVENTORY - by Workday Location
| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count by WorkDay_Location

FIREWALLS LOGGING TO SPLUNK - by Workday Location
index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| stats dc(host) by Workday_Location
| sort Workday_Location

Current output:

Table 1: Firewalls from Inventory search
WorkDay_Location   count
Location_1         5
Location_2         5

Table 2: Firewalls Logging to Splunk search
WorkDay_Location   count
Location_1         3
Location_2         5

Desired output:
WorkDay_Location   FW_Inventory   FW_Logging   Diff
Location_1         5              3            2
Location_2         5              5            0

Appreciate any help if this is possible.
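One possible sketch that avoids multisearch by using append, reusing the two searches from the post (the rename assumes the second search's Workday_Location should line up with WorkDay_Location from the inventory lookup):

| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count AS FW_Inventory by WorkDay_Location
| append
    [ search index=firewalls OR index=alerts AND host="*dmz-f*"
      | rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
      | lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
      | stats dc(host) AS FW_Logging by Workday_Location
      | rename Workday_Location AS WorkDay_Location ]
| stats sum(FW_Inventory) AS FW_Inventory sum(FW_Logging) AS FW_Logging by WorkDay_Location
| fillnull value=0 FW_Inventory FW_Logging
| eval Diff=FW_Inventory - FW_Logging
| sort WorkDay_Location

The final stats merges the inventory row and the logging row for each location into one, and fillnull covers locations that appear in only one of the two searches.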
Hello, I just started using the new Dashboard Studio at work and I am having a few problems. For one, with classic dashboards I was able to share my input configuration for drop-downs and such with someone else by sharing the current URL; however, the URL does not seem to contain that information this time, and every time I share or reload, it reverts to the defaults. What is the solution here for sharing?
Hi, I'm trying to add the source information of the metric (like the k8s pod name, k8s node name, etc.) from splunk-otel-collector-agent and then send it to the gateway (data forwarding model). I tried using the attributes and resource processors to add the source info, then enabled those processors in the pipelines in agent_config.yaml. In gateway_config.yaml, I added the processors with from_attribute to read from the agent's attribute. But I couldn't add the additional source tags to my metric. Can anyone help here? Let me know if you need more info; I can share. Thanks, Naren
Does anyone know if there is a way to suppress the sending of alerts during a certain time interval when the result is the same as the previous trigger? If the result changes, it should trigger regardless of any suppression, or only trigger when there is a new event that causes it to trigger.
I'm under the impression that HEC ingestion directly to the indexers is supported natively in Splunk Cloud. I wonder whether HEC ingestion directly to the indexers is supported in the same way on-prem?
We have a case in which each email message is stored on the file system as a distinct file. Is there a way to ingest each file as a distinct Splunk event? I assume that a UF is the right way, but I might be wrong.
Hello,

I need your help with time picker values. I'd like to be able to keep some and hide others. I would like to hide all the ranges linked to real time, and presets such as "Today".

Among the predefined periods, I would like to keep:
Yesterday
Week to date
Business week to date
Month to date
Year to date
Yesterday
Previous week
Previous business week
Previous month
Previous year
Last 7 days
Last 30 days
Other
All time

I would also like to have:
Date range
Date and time range
Advanced
I am using Windows 10 and the Splunk Universal Forwarder version 9.4.0. When I run certain Splunk commands from an Admin Command Prompt, the command window freezes with a blinking cursor and fails to execute. I have to use Ctrl+C to stop the command.

Some commands work without issues, such as:
> splunk status – which confirms that Splunk is running
> splunk version – which displays the version number

However, other commands, like:
> splunk list forward-servers
> splunk display local-index
do not return any results. Instead, the cursor just blinks indefinitely. Has anyone experienced this issue before or found a solution?
Hi at all,

I have a data structure like the following:

title1 title2 title3 title4 value

and I need to group by title1, taking the title4 where value (a numeric field) is at its maximum. How can I use eval in stats to get this? Something like:

| stats values(eval(title4 where value is max)) AS title4 BY title1

How can I do it?

Ciao.
Giuseppe
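A minimal sketch, assuming the field names from the post, that keeps only the row(s) carrying the maximum value per title1 before aggregating:

| eventstats max(value) AS max_value by title1
| where value=max_value
| stats values(title4) AS title4 max(value) AS value by title1

eventstats attaches the per-group maximum to every event, where filters down to the events that hold it, and the final stats returns one row per title1.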
Hello,

I am just trying to do a regex to split a single field into two new fields. The original field is:

alert.alias = STORE_176_RSO_AP_176_10

I need to split this out into two new fields:

First field = STORE_176_RSO
Second field = AP_176_10

I am horrific at regex and am not sure how I can pull this off. Any help would be awesome.

Thank you for your help,
Tom
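A minimal rex sketch, assuming the first part always ends in "_RSO" and the second part always starts with "AP_" (the capture-group names first_field and second_field are placeholders):

| rex field=alert.alias "^(?<first_field>.+_RSO)_(?<second_field>AP_.+)$"

For the example value this yields first_field = STORE_176_RSO and second_field = AP_176_10.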
Hello,

I am following this document to configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect: https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/ConfigureandinstallcertificatesforLogObserver?ref=hk but I am getting the errors mentioned below. I generated myFinalCert.pem as described in the document; below are the server.conf and web.conf configurations.

# cat ../etc/system/local/server.conf
[general]
serverName = ip-xxxx.us-west-2.compute.internal
pass4SymmKey = $7$IHXMpPIvtTGnxEusRYk62AjBIizAQosZq0YXtUg==

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = $7$vboieDG2v4YFg8FbYxW8jDji6woyDylOKWLe8Ow==

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

# cat ../etc/system/local/web.conf
[expose:tlPackage-scimGroup]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups/*

[expose:tlPackage-scimGroups]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups

[expose:tlPackage-scimUser]
methods = GET,PUT,PATCH,DELETE
pattern = /identity/provisioning/v1/scim/v2/Users/*

[expose:tlPackage-scimUsers]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Users

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem

After making changes to server.conf, I am able to restart the splunkd service, but after making changes to web.conf, restarting the splunkd service gets stuck. Below are the logs related to it:

# ./splunk restart
splunkd is not running. [FAILED]
Splunk> The IT Search Engine.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main sim_metrics statsd_udp_8125_5_dec summary
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-9.3.2-d8bb32809498-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done [ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available...............................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Please let me know if I am missing something. Thanks
Hi,

The Mimecast App gets events for most of the activity that occurs in the solution but does not offer an option to get archive events. Does anybody know if they plan to add that functionality soon? Just in case, so I do not have to develop that part on my own. I am referring to these two API calls:

https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-message-view-logs/
https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-search-logs/

The rest of the things are included in the current version 5.2.0. And no, the events generated when someone reads the content of an email are not stored with the Audit events.

Thanks!
Hi, I need help. I have just updated my indexer cluster, composed of 4 Windows 2022 servers, to the new Splunk version 9.4.0. As always, I followed the update procedure, but this time one of my 4 servers refuses to update; it rolls back each time. I checked the failed installation logs and noticed that the KVStore was failing to update. Can anyone help me fix this problem? Thanks for your help.
Hello,

We have a lookup CSV file with 1 million records (data1) and a KV store with 3 million records (data2). We need to compare a street address in data2 with a fuzzy match of the street address in data1, returning the property owner. Example:

data2 street address: 123 main street
data1 street address: 123 main street apt 13

We ran a regular lookup command and this took well over 7 hours. We have tried creating a sub-address lookup (data1a) with the apt/unit numbers removed, but it is still a 7 hour search. Plus, if there is more than one apt/unit at the address, there might be more than one property owner. This is why a fuzzy-type compare is what we are looking for.

Hope my explanation is clear. Ask if not.

Thanks and God bless,
Genesius
(Merry Christmas and Happy Holidays)
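As one possible direction, a sketch that precomputes a normalized join key on the data1 side so the comparison becomes an exact lookup instead of a per-row fuzzy scan (field and file names such as street_address, property_owner, and data1_normalized.csv are assumptions):

| inputlookup data1.csv
| eval addr_key=lower(trim(replace(street_address, "(?i)\s+(apt|unit|ste|suite|#)\s*\S+$", "")))
| outputlookup data1_normalized.csv

The main search over data2 would then build the same addr_key from its street address and run an ordinary lookup against data1_normalized.csv to return property_owner; an address that still maps to multiple owners after normalization comes back as a multivalue field.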
I have a client who wants to share the README file in their app with end users so that they can reference it in the UI. This seems reasonable and prevents them having to duplicate the content into a view. Otherwise the README file is only available to admins who have CLI access. I have tried using the REST endpoint to locate the file, and I have checked that the metadata allows read; it is just the path and the actual capability I am unclear on.

https://<splunk-instance>/en-GB/manager/<redacted>/apps/README.md

Thanks
Hi,

First of all, I'm a total beginner with Splunk. I just started my free trial of Splunk Cloud and want to install the UF on my MacBook. I don't know how to install the credential file, splunkclouduf.spl. I have unpacked that file, but into which directory should I move it? You can also see the directory of SplunkForwarder.