All Posts



By now I have found out that creating the user `nobody` is not allowed by Splunk; it throws the following error when creating such a user: Create user: The user="nobody" is reserved by the splunk system
Hi, I would like to have XML panel code passed dynamically from JavaScript into Splunk Simple XML. For instance, by default the XML dashboard has 2 panels. When the JavaScript file is executed, panels should be added dynamically, so that depending on the user's conditions there are 3 or more panels. I have tried passing the XML panel code from JavaScript as a token to the XML, but the dashboard does not display the panel. The following is a sample of the code I have.

XML:
<dashboard>
.......
<row>
  $table1$
</row>
........
</dashboard>

JavaScript:
const panel = '<panel><table><search><query>index=_internal | stats count by sourcetype</query><earliest>-24h@h</earliest><latest>now</latest><sampleRatio>1</sampleRatio></search><option name="count">20</option><option name="dataOverlayMode">none</option><option name="drilldown">none</option><option name="percentagesRow">false</option><option name="rowNumbers">false</option><option name="totalsRow">false</option><option name="wrap">true</option></table></panel>';
var parser = new DOMParser();
var xmlDoc = parser.parseFromString(panel, "text/xml"); // important to use "text/xml"
tokens.set("table1", xmlDoc.documentElement);
submitted_tokens.set(tokens.toJSON());

May I know how to solve this, please? Thank you in advance.
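A likely reason the panel never renders: Simple XML substitutes token values as plain text, and the dashboard framework does not re-parse them as panel markup, so setting a DOM element (or XML string) into $table1$ cannot create a panel. The sketch below illustrates this with a hypothetical helper (buildPanelXml is not Splunk API; the names are invented for illustration):

```javascript
// Sketch: why the token approach can't work. Simple XML substitutes token
// values as plain strings; the dashboard parser never re-parses them as
// <panel> markup. buildPanelXml is a hypothetical helper, not Splunk API.
function buildPanelXml(query) {
  return '<panel><table><search>' +
         '<query>' + query + '</query>' +
         '<earliest>-24h@h</earliest><latest>now</latest>' +
         '</search></table></panel>';
}

// Even if the string parses as valid XML, tokens.set() stores it (or its
// serialized form) as text, so the dashboard renders nothing for $table1$.
const panelXml = buildPanelXml('index=_internal | stats count by sourcetype');
```

The usual route for truly dynamic panels is SplunkJS (a SearchManager plus a view component such as TableView rendered into a placeholder element in the dashboard) rather than token substitution.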
Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 140: is_template (value: 0). Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 141: use_json_output (value: 0). Can anyone please suggest which valid key names I can use in place of these two? Also, this problem occurred after migration and upgrade.
Hi, I have the same issue but it's not working for me. I first created the lookup and saved the search as a report, and then I need to edit my query to append ONLY new values. The current query does not push values at all.

index="rapid7_threat_intelligence" type="Domain"
| table _time, source, type, value
| outputlookup DOMAIN_IOC_ACTIVE.csv append=true
| append [ | inputlookup append=true DOMAIN_IOC_ACTIVE.csv]
| dedup value
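The "append only new values" logic this query is aiming for (combine incoming rows with the existing lookup, then keep one row per value) can be sketched outside SPL like this. This is a toy model for illustration only, not Splunk code; the field names follow the lookup above:

```javascript
// Toy model of "append only new values": merge incoming rows with the
// existing lookup rows and keep the first row seen for each value, the way
// `| inputlookup append=true ... | dedup value` would before an outputlookup.
function mergeNewValues(existingRows, incomingRows) {
  const seen = new Set();
  const merged = [];
  // Existing rows win; incoming rows are added only when their value is new.
  for (const row of existingRows.concat(incomingRows)) {
    if (!seen.has(row.value)) {
      seen.add(row.value);
      merged.push(row);
    }
  }
  return merged;
}
```

In the SPL itself, the key point is that the outputlookup in the query above runs before the dedup, so duplicates are written to the lookup and only deduplicated in the displayed results; running dedup before the final outputlookup (overwriting rather than appending) would match the logic sketched here.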
Another sign something is not right: when I construct the database query as described in the Snowflake integration blog post, the resulting SQL string is completely malformed.

SQL string from the integration post: (screenshot)

SQL string results when I select the same options: (screenshot)

I am able to construct the SQL string using the selection options; however, each selection takes another 4-5 minutes to load.
I need a query to get the newly created use cases from the last 7 days, and another query to get the fine-tuned use cases for the last 7 days.
Hello all! Can anyone help me edit the below SPL so that it only lists the _key - value pairs for the entities?

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text fields="_key,title,identifier,informational,identifying_name"
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name"), entity_aliases=mvzip(spath(value, "identifier.fields{}"),spath(value, "identifier.values{}"),"="), entity_info=mvzip(spath(value, "informational.fields{}"),spath(value, "informational.values{}"),"=")

This is a solution I found online, but it's rather complicated for my SPL skills at the moment... Thank you in advance!
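For anyone trying to follow what the spath/mvzip chain is doing: the REST endpoint returns each entity as JSON, and the search pairs the i-th entry of the fields array with the i-th entry of the values array to build "field=value" strings. A toy model of that flattening, using an invented, simplified record shape (not the exact itoa_interface schema):

```javascript
// Toy model of the mvzip flattening: given an ITSI-style entity record,
// emit its _key plus "field=value" pairs. The record shape here is a
// simplified illustration, not the exact itoa_interface output schema.
function entityPairs(entity) {
  const pairs = [];
  const fields = entity.informational.fields;
  const values = entity.informational.values;
  // mvzip-style pairing: i-th field with i-th value
  for (let i = 0; i < Math.min(fields.length, values.length); i++) {
    pairs.push(fields[i] + '=' + values[i]);
  }
  return { key: entity._key, pairs: pairs };
}
```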
Hi @Rao_KGY, the code of your dashboard is correct (except for the difference I already described); I don't see any reason why two identical searches should give different results. Try manually copying one onto the other; maybe there's some character we don't see. Ciao. Giuseppe
I just stumbled on this and thought I'd add a few other notes...

"with props.conf in the stanza "sourcetype" where this props.conf creates a field called "action""

Just to confirm, how is the field being created? I'm assuming you mean a search-time field as opposed to an index-time field. Skimming the Mimecast for Splunk app, it looks like there are field aliases and eval statements for different source types around the `action` field, which are both search time... but I could be missing something. If you were instead referring to an index-time transformation, not only is the precedence order different, but reingestion of the data would also need to happen before things take effect.

Speaking of precedence order, it's probably time to mention that search-time attributes use user/app precedence order as documented: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles#Precedence_within_app_or_user_context

Some effects of this are:
1) Your app needs to be lexicographically after the app you're trying to override (precedence by app name is reverse lexicographic order, not forward lexicographic order).
2) Your app needs to export the corresponding settings.
3) Even with all that done, your settings from other apps will lose if searches are launched from within the Mimecast app, because the current app's settings get the highest precedence for the same stanza (this may or may not be an issue; I'm not familiar enough with the app to say, but it is a potential edge case).

Here's where we need to address something else:

"Splunk Cloud doesn't allow using local folder"

Splunk Cloud doesn't let you upload an app with a local folder; however, calculated fields and field aliases are both editable via the UI, so you could actually create a local override within the context of the original TA itself via the UI, and those would win (since merging between default and local per app happens first in user/app context resolution order).
No additional app is necessarily needed (that gets into a long-term management discussion). @richgalloway already provided an alternative solution, since within props.conf there is additional merging between stanzas for sourcetypes, hosts, and sources... but of course you need to be careful with those, since you could affect other sourcetypes too. From the props.conf.spec:

**[<spec>] stanza precedence:** For settings that are specified in multiple categories of matching [<spec>] stanzas, [host::<host>] settings override [<sourcetype>] settings. Additionally, [source::<source>] settings override both [host::<host>] and [<sourcetype>] settings.

In either case, once settings are in place on the search head, they need to be replicated to your indexers as part of the knowledge bundle before they can take effect during a search... so if you're already over the 3GB limit, you'll need to spend some time trimming the bundle size.

The resolved search-time precedence can be viewed per stanza with the properties REST endpoint on the search head, and/or the btool command (make sure to specify the appropriate `--app` and `--user` context for the correct resolution order of search-time values).

(And before you say btool is an Enterprise-only command... I may have brought it to Cloud as an SPL command, along with a knowledge bundle utility, in Admins Little Helper for Splunk, my officially unsupported but, I think, useful side project. Check it out on Splunkbase: https://splunkbase.splunk.com/app/6368 </shameless plug>)

Hope these notes help you and others in the future.
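The precedence rules described above can be made concrete with a toy resolver. This is a deliberate simplification for illustration only; real resolution also involves default/local layering within each app and whether the app exports its settings:

```javascript
// Toy model of search-time app precedence for one stanza/setting:
// the current app wins outright; otherwise apps are consulted in
// reverse lexicographic order of app name, so "zzz_override" beats "mimecast".
function resolveSetting(appConfigs, currentApp, key) {
  if (appConfigs[currentApp] && key in appConfigs[currentApp]) {
    return appConfigs[currentApp][key];
  }
  const apps = Object.keys(appConfigs).sort().reverse(); // reverse lexicographic
  for (const app of apps) {
    if (key in appConfigs[app]) return appConfigs[app][key];
  }
  return undefined;
}
```

This mirrors both points above: an app named to sort after the Mimecast app wins when searching from a neutral context, but launching the search from within the Mimecast app itself makes its own settings win regardless of name ordering.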
We have enabled Microsoft SAML for Splunk and our splunkd.log seems to be flooded with warnings like this:

WARN UserManagerPro [7456 SchedulerThread] - AQR and authentication extensions not supported. Or authentication extensions is supported but is used for tokens only. user=nobody information not found in cache

I found a few threads on AQR:
- https://community.splunk.com/t5/Monitoring-Splunk/What-is-quot-AQR-quot-and-why-is-it-throwing-warning-messages-in/m-p/347016/highlight/true#M3064
- https://community.splunk.com/t5/Security/How-do-you-resolve-splunk-log-error-messages-after-switching/m-p/354479/highlight/true#M8882

The documentation on authentication.conf also does not help me much. It seems the only way is to create a low-level user (the same as mentioned in the error) to suppress the error, which seems doable, but I doubt this is the best way and I'm unsure of the side effects. Do any of you know more?
@skalliger wrote: How did I find all of that? I have developed an app which gives an overview of the Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs) as well as recommended best practice encryption settings.

Nice work @skalliger. Is this app publicly available? I would be interested in this functionality.
Hey @gcusello, it seems like there is some confusion. My issue still stands UNRESOLVED and I am trying to get a resolution. Thanks
You should look at the _internal index size before changing it:

| dbinspect index=_internal | stats sum(sizeOnDiskMB)

In our setup, with 30 days of _internal index retention, the size on disk is 680 GB. So doubling the retention time roughly doubles this size. As you can see above, I posted a dashboard that uses the _telemetry index instead to see index usage. Our _telemetry index is 215 MB in size. We have changed _telemetry to 20 years of retention, since it does not take up much space:

[_telemetry]
# This is 20 years to get longer historical license usage.
frozenTimePeriodInSecs = 788400000
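One thing worth double-checking in the stanza above is the arithmetic for frozenTimePeriodInSecs. Converting years to seconds (using 365-day years):

```javascript
// Convert a retention period in years to frozenTimePeriodInSecs
// (365-day years, leap days ignored).
function yearsToSeconds(years) {
  return years * 365 * 24 * 60 * 60;
}

// Note: 788400000 seconds works out to 25 years by this arithmetic, not
// the 20 years the stanza comment states; 20 years would be 630720000.
const configured = yearsToSeconds(25); // 788400000
```

Either value is tiny for a 215 MB index; the point is just to make sure the comment and the configured value say the same thing.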
Hi @Rao_KGY, good for you, see you next time! Let us know if we can help you more, or please accept one answer for the other people of the Community (even if it's your own answer). Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
We are currently looking to migrate our standalone Splunk Enterprise server from the existing EC2 instance to a new one. The new server will be based on the AWS Marketplace AMI for version 9.0.8: https://aws.amazon.com/marketplace/pp/prodview-l6oos72bsyaks?sr=0-1&ref_=beagle&applicationId=AWSMPContessa

Reason for migration: we want to use the Marketplace AMI while upgrading the Splunk version. Since this migration involves going from version 8 to version 9, just copying over the apps (they contain the indexes) hasn't given us the result we wanted in our dev/test environment. We end up with no search UI loading when the search app is copied over from the previous version 8 server to the version 9 server.

Has anyone else migrated their server this way, i.e. jumping versions while migrating to a new server? What would the community recommend for the scenario we currently have? Would an in-place upgrade to version 9, followed by a copy over to the new server, be a better/recommended option?
Hello Sirs, I would like to know the most useful Splunk app for Linux auditd events. I have Linux devices such as management servers, DNS, HTTP servers, firewalls, etc. These logs are carried by both a syslog forwarder and heavy forwarders. Please suggest which Splunk app to use to monitor the audit logs. Thanks a bunch.
I have created the indexes in the web interface of Splunk Cloud, and they were not specific to any app; since I created those indexes myself, I think we can rule out the possibility of them being system-defined.
Hi All, I have searched the community threads for posts similar to this, but none have quite addressed the issue I am seeing. I have Splunk Cloud 9.1.2 and would like to retrieve logging from Snowflake. Following this Snowflake integration blog, I have installed the Splunk DB Connect app (3.15.0) and the Splunk DBX Add-on for Snowflake JDBC (1.2.1), using the self-service app install process on the Victoria experience.

Creating the identity in Splunk (matching the user created in Snowflake) works fine; however, creating the connection fails to validate (after trying for approximately 4-5 minutes) and gives me the following nondescript error.

Connection string:

jdbc:snowflake://<account>.<region>.snowflakecomputing.com/?user=SPLUNK_USER&db=SNOWFLAKE&role=SPLUNK_ROLE&application=SPLUNK

Output: In the logs I can see slightly more:

2024-02-26 00:59:26.501 +0000 [dw-868 - GET /api/connections/SnowflakeUser/status] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=validation connection_name=SnowflakeUser stanza_name= state=error sql='SELECT current_date()' message='JDBC driver: query has been canceled by user.'

This appears to hit some sort of timeout for the JDBC driver. The other thing I can see is that the stanza appears to be blank in this result, even though the default Snowflake stanza in the DB Connect app matches the stanza created in the Snowflake blog post. Any troubleshooting help would be much appreciated.
A unique feature of Ingest Actions is that they apply to cooked data.  That means they can be installed on indexers and still work on events that come from an HF.
Do you need to deploy to the indexers as well when there is an intermediary HF? All my config is on a HF.