I plan to develop a custom visualization. I am editing formatter.html:  <form class="splunk-formatter-section" section-label="Data Series"> <splunk-control-group label="Data Type"> <splunk-select id="dataTypeSelect" name="{{VIZ_NAMESPACE}}.dataType" value="Custom"> <option value="Custom">Custom</option> <option value="XBar_R-X">XBar R - X</option> <option value="LineChart">LineChart</option> <option value="Pie">Pie</option> <option value="Gauge">Gauge</option> </splunk-select> </splunk-control-group> <splunk-control-group label="Option"> <splunk-text-area id="optionTextArea" name="{{VIZ_NAMESPACE}}.option" value="{}"> </splunk-text-area> </splunk-control-group>... When I change dataType, I want the Option textarea to show a different value in the Format menu (the Option menu has many choices). How should I modify the visualization_source.js content to achieve this?
Hi, I recently tried creating a private app on Splunk Cloud. The app gets created successfully, but it does not show up in the list of apps on Splunk Cloud. I tried to create the app using both barebones and sample_app as a template, with different App IDs, but it didn't work. The app is created without any error being displayed, and I set the visibility to yes. Can someone please assist me with this? Thanks!
Hi guys, I have the following query that produces the table below: index=core_ct_report_* | eval brand=case(like(report_model, "cfg%"), "grandstream", like(report_model, "cisco%"), "Cisco", like(report_model, "ata%"), "Cisco", like(report_model, "snom%"), "Snom", like(report_model, "VISION%"), "Snom", like(report_model, "yealink%"), "Yealink", 1=1, "Other") | stats count by fw_version,report_model,brand | table brand report_model fw_version count | sort report_model, count desc In this table I want to group the rows that have the same value in the report_model column. I use stats values() to achieve that, as follows: index=core_ct_report_* | eval brand=case(like(report_model, "cfg%"), "grandstream", like(report_model, "cisco%"), "Cisco", like(report_model, "ata%"), "Cisco", like(report_model, "snom%"), "Snom", like(report_model, "VISION%"), "Snom", like(report_model, "yealink%"), "Yealink", 1=1, "Other") | stats count by fw_version,report_model,brand | stats values(brand) as brand values(fw_version) as fw_version values(count) as count by report_model | table brand report_model fw_version count But with this query the count is also grouped: on the 6th row there are count values missing; the missing counts all have the value 1, so only one '1' is shown. I can't remove count from stats values() or the count values won't appear in the final table. What am I doing wrong? Thanks in advance for your help.
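A sketch of one possible workaround (keeping the rest of the search unchanged): because values() deduplicates, pair each firmware version with its count in a single field before the second stats, so duplicate counts can no longer collapse into one value:

index=core_ct_report_*
| eval brand=case(like(report_model, "cfg%"), "grandstream", like(report_model, "cisco%"), "Cisco", like(report_model, "ata%"), "Cisco", like(report_model, "snom%"), "Snom", like(report_model, "VISION%"), "Snom", like(report_model, "yealink%"), "Yealink", 1=1, "Other")
| stats count by fw_version report_model brand
| eval fw_and_count=fw_version." (".count.")"
| stats values(brand) as brand list(fw_and_count) as fw_version_count by report_model
| table brand report_model fw_version_count

Here list() keeps one entry per firmware version in the original order, and each entry carries its own count, e.g. "1.0.4.194 (1)".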
[serversindex] Configuration initialization for /opt/splunk/var/run/searchpeers/serverhead-1721913866 took longer than expected (1002ms) when dispatching a search with search ID remote_serverhead_userxx__userxx__search__search1_1723144245.50. This usually indicates problems with underlying storage performance.
I have a custom command that calls a script for nslookup and returns the data to Splunk. It all works, but I want to use this custom command in Splunk to return the data to an eval and output that into a table. For example, the search string would look something like the following: index="*" | iplocation src_ip | eval testdata = | nslookupsearch dest_ip | table testdata _time | sort - _time NOTE: This is not the exact search string, this is just a mock string. When I run: | nslookupsearch Record_Here I get the correct output and data that I want to see. But when I run the command to attach the returned value to an eval, it fails. I keep getting errors when doing this, and I can't find something that will work like this. The testdata eval keeps failing.
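Worth noting: an eval expression cannot invoke another search command, which is why the testdata eval fails. A streaming custom command normally adds its result as a new field on each event, which can then be renamed and tabled. A minimal sketch, assuming nslookupsearch works that way and writes its output to a hypothetical field called resolved_name:

index="*"
| iplocation src_ip
| nslookupsearch dest_ip
| rename resolved_name AS testdata ``` resolved_name is a hypothetical output field; use whatever field the command actually emits ```
| table testdata _time
| sort - _time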
Hi All, I am new to using Splunk. I am uploading a CSV to Splunk that has a column called 'Transaction Date' with the entries in DD/MM/YYYY format as shown below. At the Set Source Type step I have updated the timestamp format to avoid getting the default modtime. I have updated it with %d/%m/%Y as shown below. This partly works, as my '_time' field no longer shows the default modtime. However, it shows the date in the incorrect format of MM/DD/YYYY instead of DD/MM/YYYY (also shown below). Everything else I have left as default. These are my advanced settings: Any ideas how I can fix this to display the correct format? Thank you!
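For what it's worth, TIME_FORMAT (%d/%m/%Y) only controls how the timestamp is parsed at index time; _time itself is stored as an epoch value, and how it is displayed depends on the user's locale and time settings rather than on the source data. If a guaranteed DD/MM/YYYY column is needed in the results, one sketch is to format it explicitly:

index=your_csv_index sourcetype=your_csv_sourcetype ``` placeholder index/sourcetype names ```
| eval transaction_date=strftime(_time, "%d/%m/%Y")
| table transaction_date _time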
Pretty green with SOAR and haven't been able to find a good answer to this. All of our events in SOAR are generated by pulling them in from Splunk ES. This creates one artifact for each event. I'm looking for a way to extract data from that artifact so we can start using and labeling that data. Am I missing something here? I haven't found much in the way of training on the data extraction part of this, so any tips for that would be great too.
Hello, I have 4 servers: A, B, C, and D. These servers point to two different deployment servers: A and B point to the US DS, and C and D point to the UK DS. I'm selecting these 4 servers in a multiselect input, and the dashboard has to show two different panels (hidden initially). If I select only A and B, it should show only the US DS panel. (I don't want to show the DS values in the input choices.)
I am trying to create a dashboard that uses a search returning a 6-digit number, but I need a decimal point before the last 2 digits. This is the result I get: index=net Model=ERT-SCM EM_ID=Redacted | stats count by Consumption 199486 I would like it shown like this: 1994.86 Kwh I have tried this, but it only gives me the last 2 digits with a decimal: | rex mode=sed field=Consumption "s/(\\d{4})/./g"
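Assuming the raw meter value is always in hundredths of a kWh, one sketch that avoids sed entirely is to divide by 100 and append the unit:

index=net Model=ERT-SCM EM_ID=Redacted
| stats count by Consumption
| eval Consumption=round(Consumption/100, 2)
| eval Consumption=Consumption." Kwh" ``` drop this line if the value should stay numeric for charting ```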
I tried to use "customized in source" option in Splunk Cloud (9.1.2312.203) Dashboard Studio to create a Single Value which background color is controlled by search result. However the code does no... See more...
I tried to use "customized in source" option in Splunk Cloud (9.1.2312.203) Dashboard Studio to create a Single Value which background color is controlled by search result. However the code does not work.  The same code below is tested with statics option which works well. Below is Dashboard JSON { "visualizations": { "viz_74mllhEE": { "type": "splunk.singlevalue", "options": { "majorValue": "> sparklineValues | lastPoint()", "trendValue": "> sparklineValues | delta(-2)", "sparklineValues": "> primary | seriesByName('background_color')", "sparklineDisplay": "off", "trendDisplay": "off", "majorColor": "#0877a6", "backgroundColor": "> primary | seriesByName('background_color')" }, "dataSources": { "primary": "ds_00saKHxb" } } }, "dataSources": { "ds_00saKHxb": { "type": "ds.search", "options": { "query": "| makeresults \n| eval background_color=\"#53a051\"\n" }, "name": "Search_1" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" } }, "layout": { "type": "absolute", "options": { "width": 1440, "height": 960, "display": "auto" }, "structure": [ { "item": "viz_74mllhEE", "type": "block", "position": { "x": 0, "y": 0, "w": 250, "h": 250 } } ], "globalInputs": [ "input_global_trp" ] }, "description": "", "title": "ztli_test" }
We use Splunk, and I do know that our SystemOut logs are forwarded to the Splunk indexer. Does anyone have some example SPLs for searching indexes for WebSphere SystemOut Warnings "W" and SystemOut Errors "E"? Thanks.   For your reference, here is a link to IBM's WebSphere log interpretation: ibm.com/docs/en/was/8.5.5?topic=SSEQTP_8.5.5/…
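A sketch of the kind of SPL being asked about (the index and source names here are placeholders, and the rex assumes the default basic SystemOut layout, where a one-letter severity code follows the thread ID and logger name):

index=your_websphere_index source="*SystemOut*" ``` placeholder index/source names ```
| rex "^\[[^\]]+\]\s+(?<threadId>\S+)\s+(?<component>\S+)\s+(?<severity>[A-Z])\s"
| search severity="W" OR severity="E"
| stats count by severity, component

Adjust the rex to the actual log layout, or skip it if the sourcetype already extracts a severity field.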
Hello, I am struggling to figure out how this request can be achieved. I need to report on events from an API call in Splunk; however, the API call requires variables from another API call. I have been testing with the Add-On Builder and can make the initial request. I'm seeing the resulting events in Splunk Search, but I can't figure out how to create a secondary API call that could use those fields as variables in the secondary args or parameters fields. I was trying to use the API module because I'm not fluent at all with scripting. Thanks for any help on this, it is greatly appreciated, Tom
In our current Splunk deployment we have 2 HFs: one used for DB Connect, the other used for the HEC connector and other inputs. The requirement is that if one HF goes down, the other HF can handle all the functions. So, is there a high availability option available for the heavy forwarder or for the DB Connect app?
Is it possible to get each day's first logon event (EventCode=4624) as "logon" and the last logoff event (EventCode=4634) as "logoff" and calculate the total duration? index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634 | eval action=case((EventCode=4624), "LOGON", (EventCode=4634), "LOGOFF", true(), "ERROR") | bin _time span=1d | stats count by _time action user
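A sketch of one common pattern for this (first 4624 per day as the logon, last 4634 as the logoff), assuming the user field is populated consistently on both event codes:

index=win sourcetype="wineventlog" (EventCode=4624 OR EventCode=4634)
| eval action=case(EventCode=4624, "LOGON", EventCode=4634, "LOGOFF")
| bin span=1d _time as day
| stats min(eval(if(action="LOGON", _time, null()))) as first_logon max(eval(if(action="LOGOFF", _time, null()))) as last_logoff by day user
| eval duration=tostring(last_logoff - first_logon, "duration")
| eval first_logon=strftime(first_logon, "%F %T"), last_logoff=strftime(last_logoff, "%F %T")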
Dear All, I would like to introduce a DR site along with active log ingestion (SH cluster + indexer cluster). Is there any formula or calculator to estimate the bandwidth needed to forward the data from Site 1 to Site 2?
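As a rough back-of-the-envelope sketch only (the real figure depends on replication factor, compression, and peak versus average traffic): take the daily volume that must cross sites and spread it over 86,400 seconds. For example, if 100 GB/day of replicated copies must reach the DR site, that is roughly 100 × 8 / 86,400 ≈ 9.3 Mbit/s sustained, and it is common to budget extra headroom for bursts and catch-up after outages.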
Hello, Could anyone please tell me how I can disable SSL verification for the Add-On Builder? I can't figure out where the parameter is located. Thank you for any help on this one, Tom
Using the classic type dashboards, I'm able to have a simple script run on load of the dashboard by adding something like: <dashboard script="App_Name:script_name.js" version="1.1"> But when I add this to a dashboard created using Dashboard Studio, the script does not run. How do you get a script to run on load of a dashboard that was created with Dashboard Studio?
Intro

In our previous post, we walked through integrating our Kubernetes environment with Splunk Observability Cloud using the Splunk Distribution of the OpenTelemetry Collector for Kubernetes. In this post, we'll look at the general Splunk Distribution of the OpenTelemetry Collector and dive into the configuration for a Collector deployed in host (agent) monitoring mode. We'll walk through the different pieces of the config so you can easily customize and extend your own configuration. We'll also talk about common configuration problems and how you can avoid them so that you can seamlessly get up and running with your own OpenTelemetry Collector.

Walkthrough

After you've installed the OpenTelemetry Collector for Linux or Windows, you can locate configuration files under the /etc/otel/collector directory on Linux or, on Windows, \ProgramData\Splunk\OpenTelemetry Collector\agent_config.yaml. You'll notice several Collector configuration files live under this directory: a gateway_config used to configure Collectors deployed in data forwarding (gateway) mode, an otlp_config_linux configuration file for exporting OpenTelemetry traces to Splunk, configuration files designed for use with AWS ECS tasks, etc. Because we're looking at configuring our application's instrumentation and collecting host and application metrics, we will focus on the agent_config.yaml Collector configuration file. When you open up this config, you'll notice it's composed of the following blocks:

- Extensions
- Receivers
- Processors
- Exporters
- Service

Extensions

In the extensions block of the Collector config, you'll find components that extend Collector capabilities. This section defines things like health monitoring, service discovery, and data forwarding: anything not directly involved with processing telemetry data. The Splunk Distribution of the OpenTelemetry Collector defines a few default extensions:

- health_check: sets an HTTP URL that can be hit to check server availability and uptime
- http_forwarder: accepts HTTP requests, optionally adds headers, and forwards them
- smart_agent: gets metrics for the host OS
- zpages: serves zPages, an HTTP endpoint for live debugging of different components

Receivers

Receivers are responsible for getting telemetry data into the Collector. This section of the configuration file is where data sources are configured. In this example config file, we have several default receivers configured:

- fluentforward: receives log data through the Fluentd Forward protocol
- hostmetrics: collects metrics about the system itself
- jaeger: receives traces in Jaeger format
- otlp: receives metrics, logs, and traces through gRPC or HTTP in OTLP format
- prometheus: collects metrics from the Collector itself (hence the /internal)
- smartagent: legacy SignalFx Smart Agent monitors used to send metric data (soon to be deprecated in favor of native OpenTelemetry receivers)
- signalfx: receives metrics and logs in protobuf format
- zipkin: another Collector-supported trace format

Processors

Processors receive telemetry data from the receivers and transform the data based on rules or settings. For example, a processor might filter, drop, rename, or recalculate telemetry data.

- batch: without parameters, tells the Collector to batch data before sending it to the exporters. This way, not every single piece of telemetry data is sent off to the exporter as it's processed.
- memory_limiter: limits the memory a Collector can use to ensure consistent performance
- resourcedetection: detects system metadata from the host (region, OS, cloud provider, etc.)

Exporters

This is the configuration section that defines which backends or destinations telemetry data will be sent to.

- sapm (Splunk APM exporter): exports traces in a single batch to optimize network performance
- signalfx: sends metrics, events, and traces to Splunk Observability Cloud
- splunk_hec: sends telemetry data to a Splunk HEC endpoint to deliver logs and metrics to Splunk Enterprise or Splunk Cloud Platform
- splunk_hec/profiling: sends telemetry data for AlwaysOn Profiling to a Splunk HEC endpoint
- otlp: sends data through gRPC using OTLP format, especially useful when deploying in forwarding (gateway) mode
- debug: enables debug logging

Service

The service block is where the previously configured components (extensions, receivers, processors, exporters) are enabled within the pipelines.

- telemetry.metrics: emits telemetry metrics about the Collector itself
- extensions: where all previously defined extensions are enabled for use
- pipelines: defined for each type of data (metrics, logs, traces)

Problems

There are a few problems you might run into when configuring your OTel Collector. Common issues are caused by:

- Incorrect indentation
- Receivers configured but not enabled in a pipeline
- Receivers enabled in a pipeline but not configured
- Receivers, exporters, or processors used in a pipeline that don't support that pipeline's data type (metrics, logs, traces)

Indentation is a very common problem. Collector configs are in YAML, which is indentation-sensitive. Using a YAML linter can help you verify that indentation has been maintained successfully. The good news is that the Collector fails fast: if the indentation is incorrect, the Collector will not start, so you can identify and fix the problem. If you've set up your Collector but data isn't appearing in the backend, there's a high chance receivers are being configured but subsequently aren't enabled in a pipeline. After each pipeline component is configured, it must be enabled in a pipeline under the service block of the config. If the data type used by a receiver, exporter, or processor in a pipeline isn't supported, you'll encounter an ErrDataTypeIsNotSupported error. Confirm the pipeline types of the different Collector components to ensure the data types in the config are supported. (A minimal sketch of this pipeline wiring follows the wrap-up below.)

You can always ensure your Collector is up and running with the health check extension, which is on by default with the Splunk Distribution of the OpenTelemetry Collector. From your Linux host, open http://localhost:13133. If your Collector service is up and running, you'll see a status of "Server available". You can also monitor all of your Collectors with Splunk Observability Cloud's built-in dashboard. Data for this dashboard is configured in the metrics/internal section of the configuration file under the Prometheus receiver.

Wrap up

To help you through the configuration of your own OTel Collector, we walked through the config file for the Splunk Distribution of the OpenTelemetry Collector and called out potential problems you might run into with the config. If you don't already have an OpenTelemetry Collector installed and configured, start your Splunk Observability Cloud 14 day free trial and get started with the Splunk Distribution of the OpenTelemetry Collector.
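To make the "configured versus enabled" distinction concrete, here is the minimal, illustrative sketch mentioned above. It is not a complete agent_config.yaml: component settings are trimmed, and your own file will contain many more entries.

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp]   # defined above AND listed here; forgetting this step is the most common mistake
      processors: [batch]
      exporters: [signalfx]

A receiver that appears only in the top-level receivers block does nothing until it is also listed under service.pipelines, and every component referenced in a pipeline must support that pipeline's data type.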
Resources

- Splunk OTel Collector Configuration Overview
- OpenTelemetry Collector Configuration
- Collector components
- Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on a Linux host
Hi, in the splunkd.log file I am seeing: TailReader [260668 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log' and in Splunk I am seeing the logs as well. Basically, I want to know what is happening here. This tracker.log file should be under index=_internal, but somehow it is present under index=linux, and in the Linux TA I can see the [linux_audit] sourcetype config under props.conf. What is calling this, as I am not seeing any related input parameter for it? Kind Regards, Rashid
Hello Everyone, I have written a Splunk query to remove the last 2 characters from the string processingDuration = 102ms (to get 102) for the following log: { "timestamp": "2029-02-29 07:32:54.734", "level": "INFO", "thread": "54dd544ff", "logger": "my.logger", "message": { "logTimeStamp": "2029-02-29T07:32:54.734494726Z", "logType": "RESP", "statusCode": 200, "processingDuration": "102ms", "headers": { "Content-Type": [ "application/json" ] }, "tracers": { "correlation-id": [ "hfkjhwkj98342" ], "request-id": [ "53456345" ], "service-trace-id": [ "34234623456" ] } }, "context": "hello-service" } My Splunk query: index=my_index | spath logger | search logger="my.logger" | spath "message.logType" | search "message.logType"=RESP | spath "message.tracers.correlation-id{}" | search "message.tracers.correlation-id{}"="hfkjhwkj98342" | eval myprocessTime = substr("message.processingDuration", 1, len("message.processingDuration")-2) | table "message.tracers.correlation-id{}" myprocessTime The above query treats "message.processingDuration" as a literal string and removes the last 2 characters from that string. I also tried it without the double quotes, and it returned empty: substr(message.processingDuration, 1, len(message.processingDuration)-2) Appreciate your help on this. Thanks in advance.
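A minimal sketch of the usual fix: in eval, single quotes refer to a field name (needed when the name contains dots), while double quotes create a string literal. Assuming the field still needs to be extracted with spath first:

index=my_index
| spath "message.processingDuration"
| eval myprocessTime = substr('message.processingDuration', 1, len('message.processingDuration') - 2)
| table "message.processingDuration" myprocessTime

Alternatively, | eval myprocessTime = replace('message.processingDuration', "ms$", "") strips the trailing "ms" regardless of the number's length.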