All Posts


It seems the answer is no, in the strict sense of the rules. https://docs.splunk.com/Documentation/Splunk/9.4.0/DashStudio/chartsPie#Pie_chart_options documents labelDisplay ("values" | "valuesAndPercentage" | "off"), default "values", which specifies whether to display the labels and/or slice percentages. Every option either includes the values or turns everything off. Potential workaround: append some transforms to your dataset to compute the percentage and remove the raw values, then set "labelDisplay" to just "values", which will then display the percentage. A sketch follows below.
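For example, a minimal SPL sketch of that workaround, assuming the panel's search ends in a stats count by some category field (the base search and field names here are hypothetical):

index=web_sales | stats count by category
| eventstats sum(count) as total ``` grand total across all slices ```
| eval percent = round(count / total * 100, 1) ``` each slice's share of the whole ```
| fields category percent ``` drop the raw count so the only value left is the percentage ```

With "labelDisplay" set to "values", each slice label then shows percent rather than the original count.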
Hi @rahulkumar , in this way you extract the host field from the host.name JSON field. Then you can extract the other fields that are relevant for you. Finally, you remove everything except message and put that field in _raw; in this way you have the original event as it was before Logstash ingestion. Remember that the order of execution is relevant; for this reason, in props.conf, there's a progressive number in the transformation names (see the sketch below). Ciao. Giuseppe
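A minimal sketch of that layout, with hypothetical stanza and transform names, and regexes that would need adapting to the actual JSON emitted by Logstash:

props.conf:

[my_logstash_sourcetype]
TRANSFORMS-00_set_host = set_host_from_json
TRANSFORMS-10_rewrite_raw = message_to_raw

transforms.conf:

# Pull the host value out of the nested host.name JSON field
[set_host_from_json]
SOURCE_KEY = _raw
REGEX = "host"\s*:\s*\{[^}]*"name"\s*:\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

# Replace _raw with the original event carried in the message field
[message_to_raw]
SOURCE_KEY = _raw
REGEX = "message"\s*:\s*"((?:[^"\\]|\\.)*)"
DEST_KEY = _raw
FORMAT = $1

The 00/10 prefixes are the progressive numbers mentioned above: transform classes run in lexicographic order, so the host is extracted before _raw is rewritten.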
Sorry, are you: 1) trying to make a single Search Head Deployer serve two individual clusters? OR 2) trying to move a single Search Head Deployer away from Cluster X so that it now serves Cluster Y?
Well, I suppose you could add these to the manifest file, but I can't stress enough how much I wouldn't do that. That said, I was under the impression that at some point in Splunk 8.x and later, they stopped allowing Splunk processes to reference or pull in any Python 2.x capabilities and libraries. Now, I'm not very experienced in that area, so I could easily be wrong. Moral of the story: get off the Python 2.x reliance ASAP.
Actually, it is a new project, and I am creating sample dashboards for application teams. I just want to check for any use cases I can get related to the fields given above...
First, one should not be receiving UDP (or TCP) directly into a Splunk instance. Instead, use a dedicated syslog receiver (such as syslog-ng) to save the data to disk and monitor the files with a Universal Forwarder. If that's not feasible, try these props to better extract timestamps from the events.

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\<
TIME_PREFIX = \>\s
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 32
EXTRACT-HostName = \b(?P<extracted_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
So, in the example you sent, the same thing has to be set only in transforms.conf, and I have to change host.name, right?
Hi, How will the scheduled jobs that perform the API requests (input/output) work when deploying, on a Search Head Cluster, a TA package that was created by Add-on Builder? Is there any mechanism similar to DB Connect? (DB Connect provides high availability on Splunk Enterprise with a Search Head Cluster by executing input/output on the captain.) Thank you, José
Is this the up-to-date method? Are there apps that do this, and is there documentation?
Hi @Splunked_Kid , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi @user3344 , I suppose that you're speaking of Splunk as a syslog receiver. I tried many ways to avoid adding this prefix, without any result: every syslog receiver adds the timestamp and the sender IP at the beginning of the event. It isn't a problem specific to Splunk; if you use rsyslog as the receiver, you have the same behaviour. The only way, if the prefixed timestamp is the wrong one, is to modify the timestamp configuration to take the second timestamp and not the one added by the syslog receiver. You could try to remove the header using the SEDCMD command in props.conf (a sketch is below), but you still have to configure the second timestamp as the correct one. Ciao. Giuseppe
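A minimal sketch of that approach, assuming the receiver prepends a header like "Jan 28 14:27:25 127.0.0.1 " and the original event carries an ISO 8601 timestamp; the sourcetype name and regexes are hypothetical and should be tested against real events:

[my_syslog_sourcetype]
# Strip the "Mon DD HH:MM:SS <sender> " header added by the receiver
SEDCMD-strip_syslog_header = s/^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\S+\s//
# Timestamp extraction runs before SEDCMD, so skip past the header to the second timestamp
TIME_PREFIX = ^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\S+\s
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 40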
This is probably why I'm getting confused.

Jan 100
Feb 105
March 90

Given this table, I want to have a single value trellis view that depicts each month's total, with a trend compared to the previous month: Jan 100 (no trend), Feb 105 (shows trend up 5), March 90 (shows trend down 15). I'm going to play with timewrap; maybe I'm going down the wrong path.
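For reference, a minimal sketch of computing that month-over-month delta without timewrap; the index is hypothetical, and the single value visualization can also derive a trend from a time series on its own, so treat this as just one option:

index=my_sales earliest=-3mon@mon latest=@mon
| timechart span=1mon count as total ``` one row per month ```
| delta total as trend ``` current month minus previous; empty for the first month ```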
Hello community, I need help with configuring Splunk to correctly process timestamp information in my UDP messages. When I send messages starting with a pattern like <\d+>, for example:

<777> 2025-01-03T06:12:19.236514-08:00 hello world

Splunk substitutes the original timestamp with the current date and local host address. Consequently, what I see in Splunk is:

Jan 28 14:27:25 127.0.0.1 2025-01-03T06:12:19.236514-08:00 hello world

I would like to know how to disable this behavior so that the actual timestamp from the message is preserved in the event. I have attempted to configure TIME_FORMAT and TIME_PREFIX in the props.conf file, but it seems those settings are applied after Splunk substitutes the timestamp with the current date and local host. As a workaround, I implemented the following in props.conf:

[my_sourcetype]
EXTRACT-HostName = \b(?P<extracted_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
EVAL-_time = strptime(extracted_time, "%Y-%m-%dT%H:%M:%S.%6N%z")

Is there a better way to achieve this? Any guidance would be greatly appreciated! Thank you!
Hey, We are currently ingesting wineventlog from some of our Azure VMs via Event Hub. As such, their assigned sourcetype is the Event Hub sourcetype, which means they are not subject to the wineventlog field extractions. These logs do contain a "raw xml" data field, which is like xmlwineventlog; however, things such as xmlkv, spath, or xpath don't work as intended and require additional work to extract the data correctly. Unfortunately, because of this, we are unable to dump this extra SPL into a field extraction or calculated field. Please find the SPL below:

index=azure sourcetype=mscs:azure:eventhub category="WindowsEventLogsTable"
| fields properties.RawXml
| spath input=properties.RawXml
| eval test=mvzip('Event.EventData.Data{@Name}', 'Event.EventData.Data', "=")
| mvexpand test
| rex field=test max_match=0 "^(?<kv_key>[^=]+)=(?<kv_value>[^=]+)$"
| eval {kv_key}=kv_value

The end goal is to extract the relevant Windows event fields so that the data can be mapped to a data model. It's not possible to install UFs onto these VMs, so unfortunately that is not the solution here. We'd ideally also want to avoid using "collect" to duplicate the data to a separate index/sourcetype. Has anyone else encountered this and managed to come up with a solution? Thanks
Hi @roopeshetty ,

The proxy config should be in its own stanza, not the [general] one.

[proxyConfig]
http_proxy = <string that identifies the server proxy. When set, splunkd sends all HTTP requests through this proxy server. The default value is unset.>
https_proxy = <string that identifies the server proxy. When set, splunkd sends all HTTPS requests through the proxy server defined here. If not set, splunkd uses the proxy defined in http_proxy. The default value is unset.>
no_proxy = <string that identifies the no proxy rules. When set, splunkd uses the [no_proxy] rules to decide whether the proxy server needs to be bypassed for matching hosts and IP addresses. Requests going to localhost/loopback addresses are not proxied. Default is "localhost, 127.0.0.1, ::1">

Once you make the changes and restart, run btool to make sure the server is picking them up correctly from your config set:

/<splunk_home>/bin/splunk btool server list --debug | grep proxy

The configurations returned are the ones being used by the system; confirm that all your custom configs are there and that no overlays are taking precedence over them.
https://docs.splunk.com/Documentation/Splunk/latest/Alert/EmailNotificationTokens#Result_tokens You need to use $result.your_field_name$; in your case it will be $result.Total_Count$.
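For example, a sketch of an alert/webhook message using that token; the wording here is illustrative, and $result.*$ tokens resolve against the first result row:

Alert "$name$" fired.
Total count for the last run: $result.Total_Count$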
The main question is what the dashboard is supposed to be for. Are you solving some problem from within your organization? In that case - as @richgalloway pointed out - you should have requirements for this dashboard. Are you preparing a PoC/PoV as a partner? Consult partner portal resources for existing demo materials. Are you looking to expand existing Splunk infrastructure within your company to different divisions and use cases? Consult potential stakeholders, check what their expectations of the product would be, and try to make something targeting their needs. The general answer is "it depends on what you have and what you need".
Please let me know if any panel needs to be modified or made more detailed than these basic ones. Also, please suggest if any new panel can be added, and please suggest any drilldowns as well. These are questions only your stakeholders can answer. If the proposed panels answer the questions they have or solve their problems, then modifications may not be necessary.
So I have my query working, and I have a webhook created in a channel. It says that I can send tokens when I send the alert - that the message can include tokens that insert text based on the results of the search query. The field/label I created was Total_Count. How do I pass that as a token?