All Topics
As you may know, the Splunk OTel Collector can collect logs from Kubernetes and send them to Splunk Cloud/Enterprise using the Splunk OTel Collector Helm chart distribution. However, you can also use the Splunk OTel Collector to collect logs from Windows or Linux hosts and send those logs directly to Splunk Enterprise/Cloud. This information isn't easy to find in the documentation, which makes it appear that the standalone (non-Helm-chart) distribution of the OTel Collector can only be used with Splunk Observability. In the instructions below, I will show you how to install the Collector even if you don't have a Splunk Observability (O11y) subscription.

In terms of compatibility, the Splunk OTel Collector is supported on the following operating systems:

Amazon Linux: 2, 2023 (log collection with Fluentd is not currently supported for Amazon Linux 2023)
CentOS, Red Hat, or Oracle: 7, 8, 9
Debian: 9, 10, 11
SUSE: 12, 15 for version 0.34.0 or higher (log collection with Fluentd is not currently supported)
Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04
Rocky Linux: 8, 9
Windows: 10 Pro and Home; Windows Server 2016, 2019, 2022

Once you have confirmed that your operating system is compatible, use these instructions to install the Splunk OTel Collector.

First, use sudo to export the following variable. This variable will be referenced by the Collector installer and verifies that you aren't installing the Collector for Observability, where an access token would need to be specified:

    sudo export VERIFY_ACCESS_TOKEN=false

Then run the installer (in this example we use curl, but other installation methods can be found here):

    curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --hec-token <token> --hec-url <hec_url> --insecure true

You may notice we modify the installation script from the original instructions: we specify the HEC token and HEC URL of the Splunk instance you want to send your logs to. Please note that both the HEC token and HEC URL are required for the installation to work correctly. The installer should then install the Collector and start sending logs to your Splunk instance automatically (assuming your network allows the traffic out); if you want to know which log ingestion methods are configured out of the box, see the default pipeline for the OTel Collector as specified here.

What if you want your Splunk OTel Collector to send logs to Enterprise/Cloud and also send metrics or traces to Splunk Observability? In that situation, you can modify the installation command above to include your O11y realm and access token in addition to your HEC URL and HEC token, like this:

    curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --realm <o11y_realm> --hec-token <token> --hec-url <hec_url> --insecure true -- <ACCESS_TOKEN>

Please note that the access token always follows the bare -- separator and, as a best practice, should always be placed at the end of the installer command.
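Once the installer finishes, a quick way to confirm logs are actually arriving on the Splunk side is a simple SPL check. This is a minimal sketch that assumes your HEC token routes events to the main index and that the host field reflects the collector's hostname; both are assumptions, so adjust to your own token/index configuration:

    index=main host=<collector_host> earliest=-15m
    | stats count by source, sourcetype

If this returns rows for the host's log files within a few minutes of installation, the collector-to-HEC path is working.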
I have 2 queries, where each query retrieves fields from a different source using regex; I combine them using append, group the data using stats by a common id, and then evaluate the result. What is happening is that, with a large data set, the evaluation runs before the data from query 2 is loaded, giving a wrong result. The sample query looks like this:

    index=a component=serviceA "incoming data"
    | eventstats values(name) as name, values(age) as age by id1, id2
    | append [search index=a component=serviceB "data from"
        | eventstats values(parentName) as parentName, values(parentAge) as parentAge by id1, id2]
    | stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1, id2
    | eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA", isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB", true, "No mismatch")
    | table name, age, parentAge, parentName, mismatch, id1, id2

So in my case, with large data, before the data gets loaded from query 2 it reports "data doesn't exist in serviceB" even though there is no mismatch. Please suggest how to tackle this situation; I tried using join, but it's the same.
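For context on why this can happen: append runs the serviceB search as a subsearch, and subsearches are subject to auto-finalization and result limits on large data sets, so the outer eval can run against a truncated serviceB leg. A minimal sketch of one way to avoid the subsearch entirely, assuming both components really do live in index=a and the field extractions work in a single search (note that case() wants true() rather than a bare true as its catch-all):

    index=a (component=serviceA "incoming data") OR (component=serviceB "data from")
    | stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1, id2
    | eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA", isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB", true(), "No mismatch")
    | table name, age, parentAge, parentName, mismatch, id1, id2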
I am attempting to use a lookup to feed some UNC file paths into a dashboard search, but I am getting tripped up by all the escaping of the backslashes and double quotes in my string. I want to call a field from a lookup with something like this as the actual value:

    file_path="\\\\*\\branch\\system\\type1\\*" OR file_path="\\\\*\\branch\\system\\type2\\*"

I want to populate a field in my lookup table with actual key/value pairs and output the entire string based on a menu selection. Unfortunately, if I try this, Splunk escapes all the double quotes and all the backslashes, and it ends up looking like this in the litsearch, which is basically useless:

    file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type1\\\\*\" OR file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type2\\\\*\"

How can I either properly escape the value within the lookup table so this doesn't happen, or is there any way to get Splunk to output the lookup value as a literal string and not try to interpret it?
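One pattern that sometimes sidesteps the token re-escaping is to keep the raw path strings in the lookup and let a subsearch expand them into the outer search, rather than passing the value through a dashboard token. A hedged sketch, where paths.csv, the menu column, and $menu_token$ are all hypothetical names standing in for your lookup and input:

    index=<your_index> [ | inputlookup paths.csv | where menu="$menu_token$" | fields file_path | format ]

Whether the backslashes survive intact still depends on how they are stored in the CSV, so check the Job Inspector's litsearch to see what actually gets executed.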
Not sure what I am doing wrong. I have a datamodel with a dataset that I can pivot on a field when using the datamodel explorer. When I try to use | tstats, it does not work. I get results as expected with:

    | tstats count as order_count from datamodel=spc_orders

However, if I try to pivot:

    | tstats count as order_count from datamodel=spc_orders where state="CA"

0 results. What's going on here?
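One detail worth checking: in tstats, fields that belong to a data model dataset generally have to be referenced with the dataset (object) name as a prefix; a where clause on the bare field name silently matches nothing. A sketch, assuming the dataset inside the spc_orders data model is also named spc_orders:

    | tstats count as order_count from datamodel=spc_orders where spc_orders.state="CA"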
Hi Team, below is my raw log. I want to fetch 38040 from the log; please guide:

    ArchivalProcessor - Total records processed - 38040
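For illustration, a minimal rex sketch that captures the trailing number into a field (the index name and the records field name are placeholders):

    index=<your_index> "ArchivalProcessor - Total records processed"
    | rex "Total records processed - (?<records>\d+)"
    | table _time, records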
Here is a sample event:

2024-11-12 12:12:28.000,REQUEST="{"body":"<n1:Request xmlns:ESILib=\"http:/abcs/v1\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:n1=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1\" xsi:schemaLocation=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1 FES_InventoryReservation_create.xsd\"><n1:inventoryReservationCreateRequest><n1:brand>xyz</n1:brand><n1:channel>ABC</n1:channel><n1:bannerID>8669</n1:bannerID><n1:location>WD1234</n1:location><n1:genericLogicalResources><n1:genericLogicalResource><ESILib:skuNumber>194253408031</ESILib:skuNumber><ESILib:extendedProperties><ESILib:extendedProperty><ESILib:name>ReserveQty</ESILib:name><ESILib:values><ESILib:item>1</ESILib:item></ESILib:values></ESILib:extendedProperty></ESILib:extendedProperties></n1:genericLogicalResource></n1:genericLogicalResources></n1:inventoryReservationCreateRequest></n1:Request>

How do I retrieve the banner ID and location from the above using a Splunk query?

    index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
    | rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
    | spath input=REQUEST
    | spath input=REQUEST output=Bannerid path=body.n1:Request{}.n1:bannerID
    | table Bannerid

I used the above query but it did not yield any results.
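Because the XML here is embedded inside an escaped JSON body, spath may never see well-formed XML, so a plain rex over the raw event is often more forgiving. A sketch, assuming the tags appear in the raw text exactly as shown above:

    index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
    | rex "<n1:bannerID>(?<Bannerid>[^<]+)</n1:bannerID>"
    | rex "<n1:location>(?<location>[^<]+)</n1:location>"
    | table Bannerid, location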
In my air-gapped lab, I have a 5GB Splunk license but am hardly using 1GB. Within the lab, we are working on a smaller lab that will be on a separate network and won't be talking to the other lab. We are to deploy Splunk in the new lab. How can I break the 5GB license into 3GB and 2GB, so I can use the 2GB in the new smaller lab?
Hello everyone, I'm having an issue that I'm trying to understand and fix. I have a dashboard table that displays the last 24 hrs of events. However, the event _time always shows 11 min past the hour (screenshot omitted), and these aren't the correct event times. When I run the exact same search manually, I get the correct event times. Does anyone know why this is occurring and how I can fix it? Thanks for any help on this one, much appreciated. Tom
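One way to narrow this down is to render the raw epoch alongside an explicitly formatted timestamp inside the dashboard panel itself, which separates a wrong _time value from a wrong display format. A sketch to append to the existing panel search:

    | eval epoch=_time
    | eval rendered=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | table _time, epoch, rendered

If epoch and rendered look right but _time still displays wrong, the issue is the panel's time formatting; if epoch itself is off, the panel is running a different search (for example a cached or scheduled one) than the manual run.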
Hi, I have incoming data from 2 Heavy Forwarders. Both of them forward HEC data and the internal logs. How do I identify which HF is sending a particular piece of HEC data?   Regards, Pravin
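One place to look is the HEC metrics each Heavy Forwarder writes to its own _internal metrics.log, grouped by the forwarder's host. This is only a sketch; the group and field names below are assumptions, so inspect a few raw metrics.log events from your HFs first:

    index=_internal source=*metrics.log* (host=<hf1> OR host=<hf2>) group=http_event_collector
    | stats sum(total_bytes_received) as bytes by host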
Hi Team, I have the panel query below. I want to sort on the basis of busDt and StartTime, but the results are not coming out correctly: currently it sorts on busDt but not StartTime. Could anyone guide me on this?

    index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
    | rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
    | eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
    | eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
    | eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
    | table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
    | sort busDt
    | appendcols [search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
        | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
        | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
        | eval StartTime=min(_time)
        | eval EndTime=StartTime+duration
        | eval duration_min=floor(duration/60)
        | rename duration_min as CARS.UNB_Duration
        | table StartTime EndTime CARS.UNB_Duration]
    | fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
    | fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
    | appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
        | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
        | rename file as Files
        | rename records_processed as Records
        | table Files Records]
    | appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
        | head 7
        | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
        | eval EBNCStatus="ebnc event balanced successfully"
        | table EBNCStatus True]
    | rename busDt as Business_Date
    | rename fileName as File_Name
    | rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
    | table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
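If the intent is a final ordering on both columns, one hedged tweak is a single sort at the very end of the pipeline, after the appendcols legs, using the assembled field names (sort 0 lifts the default 10,000-row cap):

    ... | sort 0 Business_Date, StartTime

Be aware that appendcols pastes columns together positionally, so sorting one leg independently (like the existing | sort busDt before the first appendcols) can misalign rows; sorting once at the end avoids that.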
Hello, I have a distributed Splunk architecture with a single search head, two indexers, and a management tier (License Master, Monitoring Console, and Deployment Server), in addition to the forwarders. SSL has already been configured for the web interfaces, but I would now like to secure the remaining components and establish SSL-encrypted connections between them as well. The certificates we are using are self-generated. Could you please guide me on how to proceed with securing all internal communications in this setup? Specifically, I would like to know whether I should generate a new certificate for each component and each connection, or if there is a more efficient way to manage SSL across the entire environment. Thank you in advance for your help!
Hi, we have onboarded data from HP Storage, and I am not sure if there is a TA for this technology, or how to properly extract the fields from the logs and then map them into a Data Model. I have many logs there and I'm confused. Thank you in advance.
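While hunting for a TA, one quick way to see what Splunk is already extracting from these events is fieldsummary. A sketch, with placeholder index/sourcetype names:

    index=<hp_storage_index> sourcetype=<hp_storage_sourcetype>
    | fieldsummary
    | table field, count, distinct_count
    | sort - count

That inventory of existing fields makes it easier to decide which extractions are still missing before mapping anything to a Data Model.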
My team has created a production environment with 6 syslog servers (2 in each of 3 multi-site clusters). My question is: should the two syslog servers in each site be active-active, or one active and one standby? Which is the better practice? And is a load balancer needed here for the syslog servers? Currently some app teams are using UDP and some are using TCP; basically these are network logs from network devices. What are the differences between a DNS load balancer and an LTM load balancer, and which is best? Please suggest what the good practice would be to achieve this without any data loss. From the syslog servers, we have a UF installed which forwards the logs to our indexers.
I am new to Splunk administration; please explain the following stanzas. We have a dedicated syslog server which receives the logs from network devices, and a UF installed on the server forwards the data to our cluster manager. These configs are in the cluster manager under manager-apps.
Hello Splunkers, I'm getting proper results without any selection in the input dropdown, and I am able to download the results of that particular table. But when I make any selection in the dashboard, since it uses a base search, it loads results with all the fields in the base search rather than the fields mentioned in that table. Here is the query:

<panel>
  <title>Raw Data</title>
  <!-- HTML Panel for Spinner -->
  <input type="text" token="value" searchWhenChanged="true">
    <label>Row Data per Page</label>
    <default>20</default>
    <initialValue>20</initialValue>
  </input>
  <input type="radio" token="field3" searchWhenChanged="true">
    <label>Condition_1</label>
    <choice value="=">Contains</choice>
    <choice value="!=">Does Not Contain</choice>
    <default>=</default>
    <initialValue>=</initialValue>
  </input>
  <input type="text" token="search" searchWhenChanged="true">
    <label>All Fields Search_1</label>
    <default>*</default>
    <initialValue>*</initialValue>
    <prefix>"*</prefix>
    <suffix>*"</suffix>
  </input>
  <input type="checkbox" token="field4">
    <label>Add New Condition</label>
    <choice value="0">Yes</choice>
  </input>
  <input type="dropdown" token="field5" searchWhenChanged="true" depends="$field4$">
    <label>Expression</label>
    <choice value="AND">AND</choice>
    <choice value="OR">OR</choice>
    <default>AND</default>
    <initialValue>AND</initialValue>
  </input>
  <input type="radio" token="field6" searchWhenChanged="true" depends="$field4$">
    <label>Condition_2</label>
    <choice value="=">Contains</choice>
    <choice value="!=">Does Not Contain</choice>
    <default>=</default>
    <initialValue>=</initialValue>
  </input>
  <input type="text" token="search2" searchWhenChanged="true" depends="$field4$">
    <label>All Fields Search_2</label>
    <default>*</default>
    <initialValue>*</initialValue>
    <prefix>"*</prefix>
    <suffix>*"</suffix>
  </input>
  <html>
    <a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=Event_Logs&amp;outputMode=csv">Download CSV</a>
  </html>
  <html depends="$showSpinner3$">
    <!-- CSS Style to Create Spinner using animation -->
    <style>
      .loadSpinner {
        margin: 0 auto;
        border: 5px solid #FFF; /* White BG */
        border-top: 5px solid #3863A0; /* Blue */
        border-radius: 80%;
        width: 50px;
        height: 50px;
        animation: spin 1s linear infinite;
      }
      @keyframes spin {
        0% { transform: rotate(0deg); }
        100% { transform: rotate(360deg); }
      }
      <!-- CSS override to hide default Splunk Search Progress Bar -->
      #panel1 .progress-bar {
        visibility: hidden;
      }
    </style>
    <div class="loadSpinner"/>
  </html>
  <table>
    <search base="base_search_index">
      <progress>
        <!-- Set the token to Show Spinner when the search is running -->
        <set token="showSpinner3">true</set>
      </progress>
      <done>
        <!-- Unset the token to Hide Spinner when the search completes -->
        <unset token="showSpinner3"></unset>
      </done>
      <query>| sort _time | eval _raw=displayname.","._raw | table _raw | appendpipe [| stats count | where count == 0 | eval _raw="No Data Found for selected time and filters" | table _raw ]</query>
      <done>
        <set token="export_sid">$job.sid$</set>
      </done>
    </search>
    <option name="count">$value$</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
    <format type="color" field="_raw">
      <colorPalette type="map">{"No Data Found for selected time and filters":#D41F1F}</colorPalette>
    </format>
  </table>
</panel>
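One common mitigation when a post-process search drags along every field of its base search is to restrict the field list at the top of the post-process query. A sketch of the <query> body under that assumption (displayname and _raw being the only fields this table needs, with _time kept for the sort):

    | fields displayname, _raw, _time
    | sort _time
    | eval _raw=displayname.","._raw
    | table _raw
    | appendpipe [| stats count | where count == 0 | eval _raw="No Data Found for selected time and filters" | table _raw]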
Hello, in Splunk Enterprise Security we would like to make it mandatory to define a notable owner in order to close a notable. We would like to avoid having closed notables without an assignee/owner. Is there a way in Splunk Enterprise Security to make the owner required in order to close a notable? Thank you very much in advance. Happy Splunking. Raphael
Hello guys, I need help with a dropdown. Basically I have this "Stage" column on a Splunk classic dashboard, in which I can choose the stage of the data. But when I reload the page or open the dashboard in a new tab (or log in on another device), it returns to the default value, which is Pending. This is the XML and the a.js I use:

------XML-------
<dashboard version="1.1" script="a.js">
  <label>Audit Progression Tracker</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="field2">
      <label>Domain Controller</label>
      <choice value="dc1">Domain Controller 1</choice>
      <choice value="dc2">Domain Controller 2</choice>
      <choice value="dc3">Domain Controller 3</choice>
      <fieldForLabel>Choose DC</fieldForLabel>
    </input>
  </fieldset>
  <row>
    <panel>
      <table id="table_id">
        <search>
          <query>
            index="ad_security_data"
            | where status="failed"
            | table checklist_name, name, mitigation
            | eval Stage="Pending"
          </query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

------a.js----------
require([
  'underscore',
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/tableview',
  'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
  console.log("Script loaded");

  var StageDropdownRenderer = TableView.BaseCellRenderer.extend({
    canRender: function(cell) {
      console.log("Checking cell for Stage column:", cell.field);
      return cell.field === "Stage";
    },
    render: function($td, cell) {
      console.log("Rendering cell for Stage column");
      var dropdownHtml = `
        <select>
          <option value="Pending" ${cell.value === "Pending" ? "selected" : ""}>Pending</option>
          <option value="Proceeding" ${cell.value === "Proceeding" ? "selected" : ""}>Proceeding</option>
          <option value="Solved" ${cell.value === "Solved" ? "selected" : ""}>Solved</option>
        </select>
      `;
      $td.html(dropdownHtml);
      updateBackgroundColor($td, cell.value);
      $td.find("select").on("change", function(e) {
        console.log("Selected value:", e.target.value);
        updateBackgroundColor($td, e.target.value);
      });
    }
  });

  function updateBackgroundColor($td, value) {
    var $select = $td.find("select"); // Select the dropdown (the <select> element)
    if (value === "Proceeding") {
      $select.css("background-color", "#FFD700");
    } else if (value === "Solved") {
      $select.css("background-color", "#90EE90");
    } else {
      $select.css("background-color", "");
    }
  }

  // Get the table and apply the custom renderer
  var table = mvc.Components.get("table_id");
  if (table) {
    console.log("Table found, applying custom renderer");
    table.getVisualization(function(tableView) {
      // Add the custom cell renderer and re-render the table
      tableView.table.addCellRenderer(new StageDropdownRenderer());
      tableView.table.render();
    });
  } else {
    console.log("Table not found");
  }
});

All I want is for the selection to stay intact whatever I do, and for it to turn back to Pending every day at 8 A.M. Thanks for the help.
Hello Splunkers!! Hope all is good. I have created a new role in Splunk and added some users to that role. I need to restrict users with that role so they cannot see the "All Configuration" option in Settings. Please help me: what settings should I change to get this result? This is what I have done so far, but nothing works for me:

    [role_Splunk_engineer]
    list_all_configurations = disabled
    edit_configurations = disabled

Thanks in advance.
I am pretty new to Splunk and my project is also new. Can someone please explain the configurations given in our cluster manager? We have a syslog server which receives logs from F5 WAF devices, and the UF on the syslog server forwards the data to our cluster manager.
I am trying to integrate Group-IB with Splunk. I installed the app and entered my API key and username, after which it redirects to the homepage. But all my dashboards are empty and no indexes are created. How can I troubleshoot or fix this?
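A first troubleshooting step that usually helps with add-on inputs is to check whether any data is being written at all, and whether splunkd is logging errors for the app's inputs. Two sketches; the broad ERROR filter is intentional, since the exact component name for the Group-IB input is an assumption here:

    | tstats count where index=* by index, sourcetype

    index=_internal source=*splunkd.log* log_level=ERROR
    | stats count by component

If no Group-IB sourcetype shows up in the first search and the second shows ExecProcessor or modular-input errors, the input itself (API key, connectivity, proxy) is the place to dig.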