All Posts


Our objective is to integrate OpenTelemetry into a new project and establish a connection with Splunk; specifically, we want to start sending OpenTelemetry (OTel) data to Splunk. OpenTelemetry can generate traces, metrics, and log data for services. Currently our focus is on collecting telemetry data for a single service stack, but if this proves successful we are open to expanding and incorporating additional services in the future.

To facilitate this integration, we are using the OpenTelemetry Collector, a core component of the OpenTelemetry project and a freely available open-source tool. Splunk offers its own distribution of the Collector, but we are not using it at present. We seek confirmation that there are no costs associated with using the upstream OpenTelemetry Collector, given that it is a community project to which vendors contribute extended functionality.

Furthermore, our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. As a Splunk Administrator, I am interested in understanding how to configure and onboard OpenTelemetry logs into Splunk, and we are seeking clarification on the potential costs and effort associated with this initiative. Is it a separate subscription or something similar? I currently lack information on this matter. Kindly assist in checking and providing an update.
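On the cost question: the upstream OpenTelemetry Collector is Apache-licensed open source with no fee of its own; data it sends into Splunk Cloud via the HTTP Event Collector (HEC) simply counts against the normal ingest license. As a rough sketch of the log path, a minimal Collector configuration might look like the following (all endpoint, token, and index values are placeholders, and the splunk_hec exporter assumes the Contrib distribution of the Collector, otelcol-contrib):

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  splunk_hec:
    # Placeholders: substitute your own HEC token, Splunk Cloud stack URL, and index.
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://http-inputs-yourstack.splunkcloud.com:443/services/collector"
    index: "otel_logs"
    sourcetype: "otel"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]

With this in place, anything instrumented with an OTLP exporter can point at the Collector, and the logs land in the named index.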
Hi All, our Splunk infrastructure, encompassing the Search Head, Cluster Master, Indexers, and License Master, is situated in the Cloud and managed by Splunk Support. Recently, one of our application teams requested that we integrate and ingest MongoDB Atlas (Host & Audit) logs into Splunk. Following the documentation below, the application team attempted to install the Splunk OpenTelemetry (OTel) collector on a Linux VM for a proof of concept (POC). In the process, they requested the generation of a token, which I fulfilled by generating one from our Splunk Cloud Search Head. Unfortunately, the attempted integration did not yield the expected results.

I am now seeking clarification on whether the token generated from the Splunk Search Head is adequate, or whether there is a need to generate an organizational access token. If the latter is necessary, I would appreciate guidance on where and how to generate it. As the administrator of our Splunk Cloud instances, I am also curious about the role of Splunk OpenTelemetry and whether it is included with Splunk Cloud; we receive multiple requests from users wanting to send OTel logs into Splunk. If Splunk OpenTelemetry is indeed included, I would appreciate guidance on generating the organizational token and where this process should take place.

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/mongodb-atlas-receiver.html
https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/install-linux.html#otel-install-linux
https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens

As I examine the documentation, it explicitly mentions Splunk Observability, so I seek confirmation that I am following the correct procedure. The user attempted the installation using the same base64-encoded access token, but the result was once again unsuccessful. Additionally, the user has confirmed that there is internet access from the VM. At this juncture, we require guidance on how to generate an "organizational access token" to facilitate the integration process.

Can anyone kindly check and guide me on this please?
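For anyone hitting the same wall: the pages linked above document Splunk Observability Cloud, a separate subscription from Splunk Cloud Platform, and organization access tokens are minted in the Observability UI, not on a search head. If the goal is simply to land MongoDB Atlas logs in a Splunk Cloud index, a HEC token from the Splunk Cloud side paired with the Collector's splunk_hec exporter should be enough, with no org token involved. A hedged sketch of that pairing follows (the key names reflect my reading of the mongodbatlas receiver's README and may differ by Collector version; all values are placeholders):

receivers:
  mongodbatlas:
    public_key: "${ATLAS_PUBLIC_KEY}"    # MongoDB Atlas API key pair, not a Splunk token
    private_key: "${ATLAS_PRIVATE_KEY}"
    logs:
      enabled: true
      projects:
        - name: "Project 0"              # placeholder Atlas project name
          collect_audit_logs: true

exporters:
  splunk_hec:
    token: "<HEC token from Splunk Cloud>"
    endpoint: "https://http-inputs-yourstack.splunkcloud.com:443/services/collector"
    index: "mongodb_atlas"

service:
  pipelines:
    logs:
      receivers: [mongodbatlas]
      exporters: [splunk_hec]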
Hi Team, it seems many of the Machine Agents (around 200) are not associated with any applications. What action do we need to take for this, and why are the machine agents not mapped to an application? FYI, we are doing end-to-end monitoring with AppDynamics. Thanks!
Hi, are you telling me to write the dedup command again? Please check the screenshot below:
Hi All, our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. We are currently in the process of integrating the ZenGRC tool with Splunk. On the ZenGRC side there is a Splunk connector, and I have created an account with admin privileges using the Splunk authentication method. Following the documentation, when I attempted to connect to Splunk via the Connectors section in ZenGRC, I encountered an error message: "Failed to Connect: Unknown error."

https://reciprocitylabs.atlassian.net/wiki/spaces/ZenGRCOnboardingGuide/pages/562331727/Splunk+Connector

For reference, the ZenGRC documentation on the Splunk Connector can be found above. When configuring the ZenGRC end, three pieces of information are required:

Splunk Instance API URL: https://[yourdomain].splunkcloud.com:8089
UserName/Email: xxx
Password: yyy

Upon attempting to connect, the process fails. Additionally, I have whitelisted the IPs as indicated in the Confluence documentation. Kindly provide guidance on resolving this issue.

IP Whitelisting - ZenGRC Wiki - Confluence (atlassian.net)
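One low-level check worth doing before debugging the connector itself is to confirm that the Splunk management API on port 8089 is reachable from outside with the same credentials; on Splunk Cloud that port is typically closed until access is arranged through Splunk Support. A sketch with placeholder hostname and credentials:

curl -k -u 'zengrc_svc:YourPassword' \
  "https://yourdomain.splunkcloud.com:8089/services/server/info?output_mode=json"

A timeout here suggests the port is not open to the ZenGRC source IPs, while a JSON response means the URL and credentials are fine and the problem lies on the connector side.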
Hello, I'm using Splunk Cloud and I have a user who wants to export search results containing 277,500 events. He is getting a timeout since the file is too large. Is there a way to export the file without changing the limitation? I cannot run a curl command since we are using SAML authentication. Thanks
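One possibility, assuming token authentication is enabled on the stack and port 8089 is reachable: a Splunk authentication token (Settings > Tokens) can be used for REST access even when SAML is the login method, and the streaming export endpoint is not subject to the UI export limits. A sketch with placeholder host and search:

curl -k -H "Authorization: Bearer ${SPLUNK_TOKEN}" \
  "https://yourdomain.splunkcloud.com:8089/services/search/jobs/export" \
  --data-urlencode search='search index=main earliest=-7d' \
  -d output_mode=csv > results.csv

Because the endpoint streams results as the search runs, the 277,500 events arrive incrementally rather than as one oversized file build.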
By now I have found out that creating the user `nobody` is not allowed by Splunk; it throws the following error when creating such a user: Create user: The user="nobody" is reserved by the splunk system
Hi, I would like to have XML panel code passed from JavaScript into the Simple XML dashboard code dynamically. For instance, by default the XML dashboard has 2 panels; when the JavaScript file is executed, panels should be added dynamically so that the user gets 3 or more panels based on conditions. I have tried to pass the XML panel code from JavaScript as a token to the XML code, but the dashboard does not display the panel. Following is a sample of the code I have.

XML:

<dashboard>
.......
<row>
    $table1$
</row>
........
</dashboard>

JavaScript:

require(['jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function($, mvc) {
    var tokens = mvc.Components.get('default');
    var submitted_tokens = mvc.Components.get('submitted');

    const panel = '<panel><table><search><query>index=_internal | stats count by sourcetype</query><earliest>-24h@h</earliest><latest>now</latest><sampleRatio>1</sampleRatio></search><option name="count">20</option><option name="dataOverlayMode">none</option><option name="drilldown">none</option><option name="percentagesRow">false</option><option name="rowNumbers">false</option><option name="totalsRow">false</option><option name="wrap">true</option></table></panel>';

    var parser = new DOMParser();
    var xmlDoc = parser.parseFromString(panel, "text/xml"); // important to use "text/xml"
    tokens.set("table1", xmlDoc.documentElement);
    submitted_tokens.set(tokens.toJSON());
});

May I know how to solve this, please? Thank you in advance.
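For context on why this fails: Simple XML tokens substitute plain strings into searches, titles, and similar attributes; they are never parsed as panel markup, so a token holding <panel>...</panel> will not render. A common alternative, sketched below under the assumption that the XML already contains a placeholder panel such as <panel><html><div id="dynamic_table"></div></html></panel> (the ids here are hypothetical), is to render a table into the placeholder with splunkjs:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc, SearchManager, TableView) {
    // Run the search that backs the dynamic panel.
    new SearchManager({
        id: 'dyn_search',
        search: 'index=_internal | stats count by sourcetype',
        earliest_time: '-24h@h',
        latest_time: 'now'
    });
    // Render its results as a table inside the placeholder div.
    new TableView({
        id: 'dyn_table',
        managerid: 'dyn_search',
        el: $('#dynamic_table')
    }).render();
});

Showing or hiding the placeholder panel conditionally can then be handled with depends/rejects tokens in the XML.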
Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 140: is_template (value: 0).
Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 141: use_json_output (value: 0).

Can anyone please suggest what valid key names I can use in place of these two? Also, this problem occurred after migration and upgrade.
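For what it's worth, is_template and use_json_output look like leftovers from an older DB Connect release rather than keys that need replacing; I am not aware of direct equivalents in current versions, so one hedged approach is simply to delete those two lines and let the remaining standard settings stand. A sketch of the cleaned stanza, with placeholder values for everything except the stanza name:

[WSUS_vUpdateInstallationInfo]
# is_template and use_json_output removed; the keys below are standard DB Connect v3 settings
connection = wsus_db
mode = batch
query = SELECT * FROM vUpdateInstallationInfo
interval = 3600
index = wsus
sourcetype = wsus:updateinstallinfo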
Hi, I have the same issue but it's not working for me. I first created the lookup and saved the search as a report; now I need to edit my query to append ONLY new values. The current query does not push values at all.

index="rapid7_threat_intelligence" type="Domain"
| table _time, source, type, value
| outputlookup DOMAIN_IOC_ACTIVE.csv append=true
| append [ | inputlookup append=true DOMAIN_IOC_ACTIVE.csv]
| dedup value
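A reordered sketch of the same idea may behave better: read the existing lookup into the result set first, deduplicate, and only then write the whole thing back, so the dedup actually affects what lands in the file (this assumes value is the unique key):

index="rapid7_threat_intelligence" type="Domain"
| table _time, source, type, value
| inputlookup append=true DOMAIN_IOC_ACTIVE.csv
| dedup value
| outputlookup DOMAIN_IOC_ACTIVE.csv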
Another sign something is not right: when I construct the database query as described in the Snowflake integration blog post, the resulting SQL string is completely malformed.

SQL string from the integration post: [screenshot]

SQL string results when I select the same options: [screenshot]

I am able to construct the SQL string using the selection options; however, each selection takes another 4-5 minutes to load.
I need a query to get the use cases created in the last 7 days, and another query to get the use cases fine-tuned in the last 7 days.
Hello all! Can anyone help me edit the below SPL so it only lists the _key - value pairs for the entities?

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text fields="_key,title,identifier,informational,identifying_name"
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_title=spath(value, "title"),
    entity_name=spath(value, "identifying_name"),
    entity_aliases=mvzip(spath(value, "identifier.fields{}"), spath(value, "identifier.values{}"), "="),
    entity_info=mvzip(spath(value, "informational.fields{}"), spath(value, "informational.values{}"), "=")

This is a solution I found online, but it's rather complicated for my SPL skills at the moment... Thank you in advance!
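If by key-value pairs you mean just each entity's _key alongside its title, a trimmed sketch of the same pattern might be enough (the field names after the rest call are my own):

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text fields="_key,title"
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_key=spath(value, "_key"), entity_title=spath(value, "title")
| table entity_key entity_title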
Hi @Rao_KGY, the code of your dashboard is correct (apart from the difference I already described), so I don't see any reason why two equal searches should give different results. Try manually copying one onto the other; maybe there's some character that we don't see. Ciao. Giuseppe
I just stumbled on this and thought I'd add a few other notes on this...

with props.conf in the stanza "sourcetype" where this props.conf creates a field called "action".

Just to confirm, how is the field being created? I'm assuming you mean a search-time field as opposed to an index-time field. Skimming the Mimecast for Splunk app, it looks like there are field aliases and eval statements around the `action` field for different source types, which are both search time... but I could be missing something. If you were instead referring to an index-time transformation, not only is the precedence order different, but reingestion of the data would also need to happen before things take effect.

Speaking of precedence order, it's probably time to mention that search-time attributes use user/app precedence order as documented: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles#Precedence_within_app_or_user_context

Some effects of this are: 1) your app needs to sort lexicographically after the app you're trying to override (precedence by app name uses reverse lexicographic order, not forward lexicographic order); 2) your app needs to export the corresponding settings; 3) even with all that done, your settings from other apps will lose if searches are launched from within the Mimecast app, because the current app's settings get the highest precedence for the same stanza (this may or may not be an issue; I'm not familiar enough with the app to say, but it is a potential edge case).

Here's where we need to address something else:

Splunk Cloud doesn't allow using local folder

Splunk Cloud doesn't let you upload an app with a local folder; however, calculated fields and field aliases are both editable via the UI, so you could actually create a local override within the context of the original TA itself via the UI, and that would win (since merging between default and local per app happens first, before user/app context resolution order). No additional app is necessarily needed (that gets into a long-term management discussion).

@richgalloway already provided an alternative solution, since within props.conf there is additional merging between stanzas for sourcetypes, hosts, and sources... but of course you need to be careful with those, since you could affect other sourcetypes too. From the props.conf.spec:

**[<spec>] stanza precedence:** For settings that are specified in multiple categories of matching [<spec>] stanzas, [host::<host>] settings override [<sourcetype>] settings. Additionally, [source::<source>] settings override both [host::<host>] and [<sourcetype>] settings.

In either case, once settings are in place on the search head, they need to be replicated to your indexers as part of the knowledge bundle before they can take effect during a search... so if you're already over the 3GB limit, you'll need to spend some time trimming the bundle size.

Seeing the resolved search-time precedence can be done per stanza with the properties REST endpoint on the search head, and/or with the btool command; make sure to specify the appropriate `--app` and `--user` context for the correct resolution order of search-time values (see the sketch after this post).

(And before you say "but btool is an Enterprise-only command"... I may have brought it to Cloud as an SPL command, along with a knowledge bundle utility, in Admins Little Helper for Splunk, my officially unsupported but I think useful side project. Check it out on Splunkbase: https://splunkbase.splunk.com/app/6368 </shameless plug>)

Hope these notes help you and others in the future.
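As a concrete illustration of that last point, the kind of invocation I mean looks like this on Enterprise (the sourcetype and app names here are placeholders):

# --debug prints the file each resolved setting came from
$SPLUNK_HOME/bin/splunk btool props list mimecast:audit --debug --app=TA_mimecast --user=admin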
We have enabled Microsoft SAML for Splunk and our splunkd.log seems to be flooded with warnings like this:

WARN UserManagerPro [7456 SchedulerThread] - AQR and authentication extensions not supported. Or authentication extensions is supported but is used for tokens only. user=nobody information not found in cache

I found a few threads on AQR:
- https://community.splunk.com/t5/Monitoring-Splunk/What-is-quot-AQR-quot-and-why-is-it-throwing-warning-messages-in/m-p/347016/highlight/true#M3064
- https://community.splunk.com/t5/Security/How-do-you-resolve-splunk-log-error-messages-after-switching/m-p/354479/highlight/true#M8882

The documentation on authentication.conf also does not help me much. It seems the only way is to create a low-level user (the same one mentioned in the error) to suppress the warning, which seems doable, but I doubt this is the best way and I am unsure of the side effects. Do any of you know more?
@skalliger wrote: How did I find all of that? I have developed an app which gives an overview of the Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs) as well as recommended best-practice encryption settings.

Nice work @skalliger. Is this app publicly available? I would be interested in this functionality.
Hey @gcusello, it seems like there is some confusion. My issue still stands UNRESOLVED and I am trying to get a resolution. Thanks
You should look at the _internal index size before changing it:

| dbinspect index=_internal | stats sum(sizeOnDiskMB)

In our setup, with 30 days of retention on the _internal index, the size on disk is 680GB. So double the retention time, and you double that size. As you can see above, I posted a dashboard that uses the _telemetry index instead to see index usage. Our _telemetry index is 215 MB in size. We have changed _telemetry retention to roughly 25 years, since it does not take up much space:

[_telemetry]
# Roughly 25 years (788400000 seconds), to keep longer historical license usage.
frozenTimePeriodInSecs = 788400000
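To make the double-the-time, double-the-size arithmetic concrete, here is a quick sketch that derives a per-day figure from dbinspect and projects it forward (everything after the first line is my own naming, and the 30 assumes the current retention window):

| dbinspect index=_internal
| stats sum(sizeOnDiskMB) as total_mb
| eval per_day_gb = round(total_mb / 1024 / 30, 1)
| eval projected_60d_gb = round(per_day_gb * 60, 1)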
Hi @Rao_KGY, good for you, see you next time! Let us know if we can help you more, or please accept one answer for the other people of the Community (even if it's your own answer). Ciao and happy splunking. Giuseppe

P.S.: Karma Points are appreciated by all the contributors.