All Posts



Thank you @dural_yyz for your prompt response and for providing the documentation. However, I need further assistance regarding the SSL certificates that need to be generated for my Splunk environment. Could you please clarify whether I need to generate a separate certificate for each component (e.g., search head, indexers, forwarders, etc.)? Additionally, do I need to create different certificates for the various connections between these components?
In my air-gapped lab, I have a 5GB Splunk license but am hardly using 1GB of it. Within the lab, we are building a smaller lab that will be on a separate network and won't talk to the other lab. We are to deploy Splunk in that new lab. How can I split the 5GB license into 3GB and 2GB, so I can use the 2GB in the new smaller lab?
@dural_yyz I tried that, but it's not working.
Hello everyone, I'm having an issue that I'm trying to understand and fix. I have a dashboard table that displays the last 24 hours of events. However, the event _time always shows 11 minutes past the hour (screenshot omitted), and these aren't the correct event times. When I run the exact same search manually, I get the correct event times. Does anyone know why this is occurring and how I can fix it? Thanks for any help on this one, much appreciated. Tom
If it's the HP Nimble Storage solution, try: https://splunkbase.splunk.com/app/2840. Otherwise, I don't see any TAs on Splunkbase for HP storage.
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS   These articles can explain it much better than I can, and they come straight from the source.
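To make the Lantern guidance concrete, here is a minimal sketch of forwarder-to-indexer TLS. The certificate paths, group name, and port below are examples, not your real values; the stanza keys are standard Splunk settings, but adapt paths, passwords, and verification settings to your own PKI:

```ini
# inputs.conf on each indexer (sketch; paths are examples)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/indexer.pem
sslPassword = <key password>
requireClientCert = true

# outputs.conf on each forwarder (sketch)
[tcpout:primary_group]
server = idx1.example.local:9997, idx2.example.local:9997
clientCert = /opt/splunkforwarder/etc/auth/mycerts/forwarder.pem
sslPassword = <key password>
sslVerifyServerCert = true

# server.conf on both sides: trust the signing CA
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCA.pem
```

Typically one certificate per host, all signed by a shared internal CA, is sufficient; you do not normally need a distinct certificate per connection.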
You only have a sort on Business Date; you never say to sort on Start Time as well. In fact, the field Start Time is evaluated after the sort is done. If you want a sort, it should be done after both fields are available in a sortable format:

| sort "Business_Date" "StartTime"
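As a minimal illustration (the index, sourcetype, and field names here are hypothetical), the eval that builds the sortable field has to come before the sort:

```
index=example_idx sourcetype=example_logs
| eval StartTime=strftime(_time, "%F %T")
| table Business_Date StartTime
| sort "Business_Date" "StartTime"
```

If the sort ran before the eval, StartTime would not exist yet and could not influence the ordering.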
Hi, I have incoming data from 2 Heavy Forwarders. Both of them forward HEC data as well as their internal logs. How do I identify which HF is sending a particular HEC event?   Regards, Pravin
Hi Team, I have the below panel query. I want to sort on the basis of busDt and StartTime, but the results are not coming out correctly: currently it sorts on busDt but not on StartTime. Could anyone guide me on this?

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
| eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
| eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
| sort busDt
| appendcols
    [ search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
    | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
    | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
    | eval StartTime=min(_time)
    | eval EndTime=StartTime+duration
    | eval duration_min=floor(duration/60)
    | rename duration_min as CARS.UNB_Duration
    | table StartTime EndTime CARS.UNB_Duration ]
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
| appendcols
    [ search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
    | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
    | rename file as Files
    | rename records_processed as Records
    | table Files Records ]
| appendcols
    [ search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
    | head 7
    | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
    | eval EBNCStatus="ebnc event balanced successfully"
    | table EBNCStatus True ]
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
Hello, I have a distributed Splunk architecture with a single search head, two indexers, and a management tier (License Master, Monitoring Console, and Deployment Server), in addition to the forwarders. SSL has already been configured for the web interfaces, but I would now like to secure the remaining components and establish SSL-encrypted connections between them as well. The certificates we are using are self-generated. Could you please guide me on how to proceed with securing all internal communications in this setup? Specifically, should I generate a new certificate for each component and each connection, or is there a more efficient way to manage SSL across the entire environment? Thank you in advance for your help!
That's helpful for understanding the problem! Still, I don't fully understand the solution. What are my options if I want to expose the interval to the client and still keep the application a single instance?
Hi, we have onboarded data from HP Storage, and I am not sure whether there is any TA for this technology, or how to properly extract the fields from the logs and then map them in a Data Model. I have many logs there and I'm confused. Thank you in advance.
My team has created a production environment with 6 syslog servers (2 in each of 3 multi-site clusters). My question is: should the two syslog servers be active-active, or one active and one standby? Which is good practice? And do we need a load balancer for the syslog servers? Currently some app teams are using UDP and some TCP; these are basically network logs from network devices. What are the differences between a DNS load balancer and an LTM load balancer, and which is best? Please suggest the good practice to achieve this without any data loss. The syslog servers have a UF installed on them which forwards to our indexers.
I am new to Splunk administration; please explain the following stanzas. We have a dedicated syslog server which receives the logs from network devices, and a UF installed on that server forwards the data to our cluster manager. These configs are on the cluster manager under manager-apps.
I'm having the same issue after upgrading to 9.3, running ES version 7.3. Did the upgrade of ES resolve the issue for you? Currently getting no notables.   Thanks, Damian
Hello Splunkers, I'm getting proper results without any selection in the input dropdown, and I'm able to download the results of that particular table. But when I make any selection in the dashboard, since it uses a base search, it loads results with all the fields of the base search rather than the fields listed in that table. Here is the query:

<panel>
  <title>Raw Data</title>
  <!-- HTML Panel for Spinner -->
  <input type="text" token="value" searchWhenChanged="true">
    <label>Row Data per Page</label>
    <default>20</default>
    <initialValue>20</initialValue>
  </input>
  <input type="radio" token="field3" searchWhenChanged="true">
    <label>Condition_1</label>
    <choice value="=">Contains</choice>
    <choice value="!=">Does Not Contain</choice>
    <default>=</default>
    <initialValue>=</initialValue>
  </input>
  <input type="text" token="search" searchWhenChanged="true">
    <label>All Fields Search_1</label>
    <default>*</default>
    <initialValue>*</initialValue>
    <prefix>"*</prefix>
    <suffix>*"</suffix>
  </input>
  <input type="checkbox" token="field4">
    <label>Add New Condition</label>
    <choice value="0">Yes</choice>
  </input>
  <input type="dropdown" token="field5" searchWhenChanged="true" depends="$field4$">
    <label>Expression</label>
    <choice value="AND">AND</choice>
    <choice value="OR">OR</choice>
    <default>AND</default>
    <initialValue>AND</initialValue>
  </input>
  <input type="radio" token="field6" searchWhenChanged="true" depends="$field4$">
    <label>Condition_2</label>
    <choice value="=">Contains</choice>
    <choice value="!=">Does Not Contain</choice>
    <default>=</default>
    <initialValue>=</initialValue>
  </input>
  <input type="text" token="search2" searchWhenChanged="true" depends="$field4$">
    <label>All Fields Search_2</label>
    <default>*</default>
    <initialValue>*</initialValue>
    <prefix>"*</prefix>
    <suffix>*"</suffix>
  </input>
  <html>
    <a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=Event_Logs&amp;outputMode=csv">Download CSV</a>
  </html>
  <html depends="$showSpinner3$">
    <!-- CSS Style to Create Spinner using animation -->
    <style>
      .loadSpinner {
        margin: 0 auto;
        border: 5px solid #FFF; /* White BG */
        border-top: 5px solid #3863A0; /* Blue */
        border-radius: 80%;
        width: 50px;
        height: 50px;
        animation: spin 1s linear infinite;
      }
      @keyframes spin {
        0% { transform: rotate(0deg); }
        100% { transform: rotate(360deg); }
      }
      /* CSS override to hide default Splunk Search Progress Bar */
      #panel1 .progress-bar { visibility: hidden; }
    </style>
    <div class="loadSpinner"/>
  </html>
  <table>
    <search base="base_search_index">
      <progress>
        <!-- Set the token to Show Spinner when the search is running -->
        <set token="showSpinner3">true</set>
      </progress>
      <done>
        <!-- Unset the token to Hide Spinner when the search completes -->
        <unset token="showSpinner3"></unset>
      </done>
      <query>| sort _time
| eval _raw=displayname.","._raw
| table _raw
| appendpipe
    [| stats count
     | where count == 0
     | eval _raw="No Data Found for selected time and filters"
     | table _raw ]</query>
      <done>
        <set token="export_sid">$job.sid$</set>
      </done>
    </search>
    <option name="count">$value$</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
    <format type="color" field="_raw">
      <colorPalette type="map">{"No Data Found for selected time and filters":#D41F1F}</colorPalette>
    </format>
  </table>
</panel>
Hello, in Splunk Enterprise Security we would like to make it mandatory to define a notable owner before a notable can be closed. We would like to avoid having closed notables without an assignee/owner. Is there a way in Splunk Enterprise Security to make the owner required in order to close a notable? Thank you very much in advance. Happy Splunking. Raphael
It's a bit more complicated than just saying that "search-time extractions are simpler". But "the Splunk way" is to use search-time extractions when possible. That sums it up without getting too deeply into technical intricacies of the indexing process. And yes, if you don't add fields.conf entries for indexed fields, Splunk won't know that it has to look for indexed fields instead of search-time extracted ones. That's why you wouldn't find your data when you had those TRANSFORMS.
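For reference, a minimal fields.conf sketch that tells Splunk a field is index-time rather than search-time extracted; the field name here is hypothetical, standing in for whatever your TRANSFORMS rule creates:

```ini
# fields.conf -- deployed to the search tier
# "txn_id" is a hypothetical indexed field created by a TRANSFORMS rule
[txn_id]
INDEXED = true
```

Without this entry, a search like txn_id=123 is treated as a search-time extraction, which is why the indexed data appeared to be missing.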
Hello guys, I need help with a dropdown. Basically I have this "Stage" column on a Splunk classic dashboard, in which I can choose the stage of the data. But when I reload the page or open the dashboard in a new tab (or log in on another device), it returns to the default value, which is Pending. This is the XML and the a.js I use:

------XML-------
<dashboard version="1.1" script="a.js">
  <label>Audit Progression Tracker</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="field2">
      <label>Domain Controller</label>
      <choice value="dc1">Domain Controller 1</choice>
      <choice value="dc2">Domain Controller 2</choice>
      <choice value="dc3">Domain Controller 3</choice>
      <fieldForLabel>Choose DC</fieldForLabel>
    </input>
  </fieldset>
  <row>
    <panel>
      <table id="table_id">
        <search>
          <query>
            index="ad_security_data"
            | where status="failed"
            | table checklist_name, name, mitigation
            | eval Stage="Pending"
          </query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

------a.js----------
require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    console.log("Script loaded");

    var StageDropdownRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            console.log("Checking cell for Stage column:", cell.field);
            return cell.field === "Stage";
        },
        render: function($td, cell) {
            console.log("Rendering cell for Stage column");
            var dropdownHtml = `
                <select>
                    <option value="Pending" ${cell.value === "Pending" ? "selected" : ""}>Pending</option>
                    <option value="Proceeding" ${cell.value === "Proceeding" ? "selected" : ""}>Proceeding</option>
                    <option value="Solved" ${cell.value === "Solved" ? "selected" : ""}>Solved</option>
                </select>
            `;
            $td.html(dropdownHtml);
            updateBackgroundColor($td, cell.value);
            $td.find("select").on("change", function(e) {
                console.log("Selected value:", e.target.value);
                updateBackgroundColor($td, e.target.value);
            });
        }
    });

    function updateBackgroundColor($td, value) {
        var $select = $td.find("select"); // grab the dropdown (the <select> element)
        if (value === "Proceeding") {
            $select.css("background-color", "#FFD700");
        } else if (value === "Solved") {
            $select.css("background-color", "#90EE90");
        } else {
            $select.css("background-color", "");
        }
    }

    // Get the table and apply the custom renderer
    var table = mvc.Components.get("table_id");
    if (table) {
        console.log("Table found, applying custom renderer");
        table.getVisualization(function(tableView) {
            // Add the custom cell renderer and re-render the table
            tableView.table.addCellRenderer(new StageDropdownRenderer());
            tableView.table.render();
        });
    } else {
        console.log("Table not found");
    }
});

All I want is to keep the selection intact whatever I do, and to have it turn back to Pending every day at 8 A.M. Thanks for the help.
Hello Splunkers!! Hope all is good. I have created a new role in Splunk and added some users to that role. I need to restrict users with that role so they cannot see the "All Configurations" option in Settings. Please help me: what settings should I change to get this result? What I have tried so far (but nothing works for me): [role_Splunk_engineer] list_all_configurations = disabled edit_configurations = disabled Thanks in advance.
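For what it's worth, list_all_configurations and edit_configurations are not capability names Splunk recognizes, which is likely why the stanza has no effect. Visibility of Settings menu items is driven by the role's capabilities, so the usual approach is to make sure the role simply lacks the broad admin capabilities rather than inventing new keys. A hedged sketch follows; the capability names below are real, but exactly which capability gates the "All configurations" page can differ between Splunk versions, so verify on your instance:

```ini
# authorize.conf (sketch -- check against your version's capability list)
[role_splunk_engineer]
importRoles = user
# If an imported role grants these broad capabilities, explicitly drop them:
admin_all_objects = disabled
edit_server = disabled
```

You can review which capabilities a role actually ends up with under Settings > Roles, or via a search like | rest /services/authorization/roles.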