All Topics

Hi, I have a dashboard that shows several charts, and some of them load slowly. 1) Is it possible to add a radio button with two values (OFF, ON) so that those charts do not load when a user opens the dashboard, and only load when the user switches them on? 2) Is it possible to set a load priority, so the light charts load first and the heavy ones only start once the light ones have finished?

    <panel>
      <title>Number of Request Called From Webservice</title>
      <chart>
        <search>
          <query>index="app" "INFO  [APP] [log]" | rex "status\[(?&lt;status&gt;\w+)" | timechart count(status) by status usenull=f useother=f limit=0</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">area</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">bottom</option>
        <option name="charting.lineWidth">2</option>
        <option name="height">181</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>

Any ideas? Thanks
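A hedged sketch of the usual pattern, using a gating token name of my own choosing (show_heavy): bind the radio input to a token in a <change> block and gate each heavy panel with depends, so its search only dispatches once the user selects ON. Panels hidden behind an unset token never run their searches, which also gives a rough load priority — light panels render immediately, heavy ones on demand.

    <input type="radio" token="tok_heavy" searchWhenChanged="true">
      <label>Heavy charts</label>
      <choice value="off">OFF</choice>
      <choice value="on">ON</choice>
      <default>off</default>
      <change>
        <!-- set the gating token only when the user selects ON -->
        <condition value="on">
          <set token="show_heavy">true</set>
        </condition>
        <condition value="off">
          <unset token="show_heavy"></unset>
        </condition>
      </change>
    </input>

    <panel depends="$show_heavy$">
      <title>Number of Request Called From Webservice</title>
      <chart>
        <search>
          <query>index="app" "INFO  [APP] [log]" | rex "status\[(?&lt;status&gt;\w+)" | timechart count(status) by status usenull=f useother=f limit=0</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
        </search>
      </chart>
    </panel>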
Hi! I would like to separate the field Privilegio:

    src_user         Privilegio                                     count
    ---------------  ---------------------------------------------  -----
    user-RAC0308$    SeSecurityPrivilege                            8127
                     SeBackupPrivilege
                     SeRestorePrivilege
                     SeTakeOwnershipPrivilege
                     SeDebugPrivilege
                     SeSystemEnvironmentPrivilege
                     SeLoadDriverPrivilege
                     SeImpersonatePrivilege
                     SeDelegateSessionUserImpersonatePrivilege
                     SeEnableDelegationPrivilege
                     SeCreateTokenPrivilege
                     SeAssignPrimaryTokenPrivilege

It only counts the first value, and the other values are appended below it. These are the Windows privileges from EventID 4672. My query is the following:

    index=oswinsec EventCode=4672 | stats values(PrivilegeList) as Privilegio count by src_user
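A hedged sketch: split the multivalue PrivilegeList into one row per privilege before counting, so each privilege gets its own count. The makemv step is an assumption (4672 events often carry the list as whitespace-separated text); adjust or drop it depending on how your add-on extracts the field.

    index=oswinsec EventCode=4672
    | makemv PrivilegeList
    | mvexpand PrivilegeList
    | stats count by src_user PrivilegeList
    | rename PrivilegeList as Privilegio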
Hello everyone. My aim is to get the dropdown's selected value as a token so I can use it in a SearchManager query. I am also using dependent dropdowns: how do I pass the token of the portfolio dropdown to the application_code dropdown so that, when the portfolio value changes, the application_code dropdown repopulates and shows the data for the currently selected portfolio? The application_code dropdown should also fall back to its default value. I am attaching the code I am using. Could someone please help me and explain how tokens work in JS? I am new to Splunk JS.

xml:

    <form script="demo.js">
      <label>Demo</label>
      <row>
        <panel>
          <html>
            <div id="mydropdownview1"/>
            <div id="mydropdownview2"/>
            <div id="mytimerangeview"/>
            <div id="singleid0"/>
            <div id="singleid1"/>
            <div id="singleid2"/>
            <div id="singleid3"/>
            <div id="singleid4"/>
            <div id="singleid5"/>
            <div id="singleid6"/>
            <div id="singleid7"/>
            <div id="singleid8"/>
            <div id="singleid9"/>
            <div id="singleid10"/>
            <div id="singleid11"/>
          </html>
        </panel>
      </row>
    </form>

demo.js:

    require(["splunkjs/ready!"], function (mvc) {
        var deps = [
            "jquery",
            "splunkjs/mvc/dropdownview",
            "splunkjs/ready!",
            "splunkjs/mvc/searchmanager",
            "splunkjs/mvc/tableview",
            "splunkjs/mvc/singleview",
            "splunkjs/mvc/timerangeview",
            "splunkjs/mvc"
        ];
        require(deps, function (mvc) {
            // var randomid = () => Math.random()
            const searchId = Date.now() + '';
            const searchId1 = searchId + Date.now() + '';
            const searchId2 = searchId1 + Date.now() + '';
            const searchId3 = searchId2 + Date.now() + '';
            const searchId4 = searchId3 + Date.now() + '';
            const searchId5 = searchId4 + Date.now() + '';
            const searchId6 = searchId5 + Date.now() + '';
            const searchId7 = searchId6 + Date.now() + '';
            const searchId8 = searchId7 + Date.now() + '';
            const searchId9 = searchId8 + Date.now() + '';
            const searchId10 = searchId9 + Date.now() + '';
            const searchId11 = searchId10 + Date.now() + '';
            const dropdownsearch1 = searchId11 + Date.now() + '';
            const dropdownsearch2 = dropdownsearch1 + Date.now() + '';

            var SearchManager = require("splunkjs/mvc/searchmanager");
            var DropdownView = require("splunkjs/mvc/dropdownview");
            var m = require("splunkjs/mvc");
            var TimeRangeView = require("splunkjs/mvc/timerangeview");

            var mychoices = [
                { label: "ALL", value: "*" },
            ];

            // Access the "default" token model
            var tokens = m.Components.get("default");
            // Retrieve the value of a token $mytoken$
            // portfolio_token = tokens.get("portfolio_token");

            /** Dropdowns */
            var portfolio = new DropdownView({
                id: "dropdownid1",
                managerid: dropdownsearch1,
                default: "demo_portfolio_value",
                labelField: "portfolio",
                valueField: "portfolio",
                el: $("#mydropdownview1")
            }).render();

            new SearchManager({
                id: dropdownsearch1,
                search: `| inputlookup demo_portfolio_filter.csv | table portfolio | dedup portfolio | sort portfolio`
            });

            // defaultTokenModel.set("portfolio_token", "*");
            // var portfolio_dropdown = tokens.get("portfolio_token")

            var application_code = new DropdownView({
                id: "dropdownid2",
                choices: mychoices,
                managerid: dropdownsearch2,
                // default: "ALL",
                selectFirstChoice: "true",
                labelField: "application_name",
                valueField: "application_code",
                el: $("#mydropdownview2")
            }).render();

            var portfolio_search = new SearchManager({
                id: dropdownsearch2,
                search: `| inputlookup demo_portfolio_filter.csv
                         | search portfolio=${portfolio.val()}
                         | eval application_name=application_code."-".application_name
                         | table application_name application_code
                         | sort application_code`
            });

            // Instantiate a view using the default time range picker
            var mytimerange = new TimeRangeView({
                id: "example-timerange",
                managerid: "example-search",
                preset: "-4h@m",
                el: $("#mytimerangeview")
            }).render();

            var Controlm_NOK = new SearchManager({
                id: searchId,
                label: "NOK Percent Controlm",
                earliest_time: mytimerange.val().earliest_time,
                latest_time: mytimerange.val().latest_time,
                search: `somebasesearch | stats values(percentage) by application_code`,
                preview: true,
                autostart: true,
                cache: true
            });

            /** OK Percent */
            const search1 = new SearchManager({
                id: searchId1,
                label: "OK Percent Controlm",
                earliest_time: mytimerange.val().earliest_time,
                latest_time: mytimerange.val().latest_time,
                search: `somebasesearch | stats values(OK_Percentage) by application_code`,
                preview: true,
                autostart: true,
                cache: true
            });

            var SingleView = require('splunkjs/mvc/singleview');
            new SingleView({
                id: "single0",
                managerid: searchId,
                underLabel: "singleview nok",
                colorMode: "block",
                drilldown: "none",
                rangeColors: "[\"0x6db7c6\",\"0x65a637\",\"0xf7bc38\",\"0xd93f3c\"]",
                rangeValues: "[0,80,95]",
                useColors: true,
                "trellis.enabled": true,
                "trellis.splitBy": "Location_Name",
                "trellis.size": "medium",
                el: $("#singleid0")
            }).render();

            new SingleView({
                id: "single1",
                underLabel: "singleview ok",
                managerid: searchId1,
                colorMode: "block",
                drilldown: "none",
                rangeColors: "[\"0x6db7c6\",\"0x65a637\",\"0xf7bc38\",\"0xd93f3c\"]",
                rangeValues: "[0,80,95]",
                useColors: true,
                "trellis.enabled": true,
                "trellis.splitBy": "Location_Name",
                "trellis.size": "medium",
                el: $("#singleid1")
            }).render();
        });
    });
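A hedged sketch of the usual token wiring, reusing the variable names from the post: push the portfolio selection into the default token model on every change, then reference $portfolio_token$ in the child dropdown's SearchManager with tokens enabled, so the manager re-dispatches (and the dropdown bound to it repopulates) whenever the token changes. Replace the portfolio_search definition above with something like this:

    // 1) Write the portfolio selection into a token whenever it changes.
    portfolio.on("change", function () {
        tokens.set("portfolio_token", portfolio.val());
    });

    // 2) Reference the token in the child search; with tokens enabled the
    //    manager re-runs automatically each time $portfolio_token$ changes,
    //    which repopulates the application_code dropdown bound to it.
    var portfolio_search = new SearchManager({
        id: dropdownsearch2,
        search: '| inputlookup demo_portfolio_filter.csv '
              + '| search portfolio="$portfolio_token$" '
              + '| eval application_name=application_code."-".application_name '
              + '| table application_name application_code '
              + '| sort application_code'
    }, { tokens: true, tokenNamespace: "default" });

The key design point: calling portfolio.val() once at page load (as in the original code) freezes the value into the search string, whereas a token reference stays live and re-drives the dependent search on every change.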
We are using the latest version of the Splunk Add-on for Salesforce. The integration account we use on the Salesforce side is set up with the System Administrator profile, which works fine, but because of the elevated access we don't want to use it. We have a more limited profile that seems to have all the correct permissions, but when we look at the Event Log data that comes over, it only contains events for the integration account and no other users. I've compared the profiles and the only difference I see is that Sys Admin has access to all objects; not sure why that would matter, though. Any ideas on what permission we might be missing?
Up to 8.5, I had no problem downloading results. In 9.0.1, the server returns

    <response>
      <messages>
        <msg type="ERROR">Service Unavailable</msg>
      </messages>
    </response>

whenever I try to export (download), whether from the search window or from a dashboard. The problem, it seems, is that /servicesNS/admin/search/search/jobs/<job id>/results/export invokes a Python script that thinks my server_hostname is 127.0.0.1 (localhost), while the server's certificate is for the server name. (I am using a publicly signed custom cert.) Does anyone else get this problem? How do you fix it?

To test, I run a simple search "| tstats count where index=_internal", then click the download/export button. The server returns the above error message, and web_service.log shows these errors:

    2022-11-18 21:24:52,377 INFO [6377f8245c7fc3f4089c10] startup:139 - Splunk appserver version=9.0.1 build=82c987350fde isFree=True isTrial=False
    2022-11-18 21:24:52,415 ERROR [6377f8245c7fc3f4089c10] __init__:868 - Socket error communicating with splunkd (error=[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)), path = /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000
    2022-11-18 21:24:52,416 ERROR [6377f8245c7fc3f4089c10] decorators:318 - Splunkd daemon is not responding: ("Error connecting to /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)",)
    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 850, in streamingRequest
        conn.connect()
      File "/opt/splunk/lib/python3.7/http/client.py", line 1451, in connect
        server_hostname=server_hostname)
      File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
        session=session
      File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create
        self.do_handshake()
      File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 304, in handle_exceptions
        return fn(self, *a, **kw)
      File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-1471>", line 2, in getJobAsset
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 359, in apply_cache_headers
        response = fn(self, *a, **kw)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/search.py", line 392, in getJobAsset
        return self.streamJobExport(job, asset, **kwargs)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/search.py", line 154, in streamJobExport
        stream = rest.streamingRequest(uri, getargs=getargs, postargs=postargs, timeout=export_timeout)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 869, in streamingRequest
        raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
    splunk.SplunkdConnectionException: Splunkd daemon is not responding: ("Error connecting to /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)",)

Given there was no such problem before Splunk 9, I surmise that Splunk 9 added some server_name configuration for downloads that is missing from my localization, but I cannot find any. My local configs are:

server.conf — which contains the serverName property that points to my server's name:

    [general]
    serverName = <my server domain>
    pass4SymmKey = <some key>

    [sslConfig]
    sslVerifyServerCert = true
    #cliVerifyServerName = true
    # SSL settings
    sslPassword = <some password>
    serverCert = /var/opt/<some file path>.crt
    caCertFile = /etc/pki/tls/certs/ca-bundle.crt

web.conf — contains no name, so I assume it uses serverName from server.conf:

    [settings]
    httpport = 443
    enableSplunkWebSSL = true
    privKeyPath = /var/opt/<some file path>.key
    serverCert = /var/opt/<some file path>.crt
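One workaround worth trying — an assumption on my part, not a confirmed fix: Splunk Web reaches splunkd at the address in web.conf's mgmtHostPort, which defaults to 127.0.0.1:8089. With sslVerifyServerCert = true, the appserver's Python client verifies your certificate against 127.0.0.1 and fails, which matches the traceback above. Pointing mgmtHostPort at the hostname the certificate was actually issued for should let verification pass:

    # web.conf -- sketch; use the CN/SAN from your certificate
    [settings]
    mgmtHostPort = <my server domain>:8089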
I have this query:

    index=tenable sourcetype="tenable:io:vuln" state!=fixed eventtype="*"
    | dedup dns_name plugin.id
    | eval discovery = strptime(last_found, "%Y-%m-%dT%H:%M:%S.%3N%Z") - strptime(first_found, "%Y-%m-%dT%H:%M:%S.%3N%Z")
    | eval Age = round(discovery / 86400, 2)
    | eval first_found=strftime(strptime(first_found,"%Y-%m-%dT%H:%M:%S.%3N"),"%d-%B-%y")
    | eval last_found=strftime(strptime(last_found,"%Y-%m-%dT%H:%M:%S.%3N"),"%d-%B-%y")
    | table plugin.id dns_name first_found last_found Age check_type category severity

I am trying to create a trending chart that shows the number of plugin.id values by week for the past 30 days.
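A hedged sketch: bucket the events by week with timechart; over a 30-day window, span=1w yields four to five points. Whether you want distinct plugin IDs per week or one series per plugin.id depends on the chart, so both variants are shown.

    index=tenable sourcetype="tenable:io:vuln" state!=fixed earliest=-30d@d
    | dedup dns_name plugin.id
    | timechart span=1w dc(plugin.id) as distinct_plugins

    index=tenable sourcetype="tenable:io:vuln" state!=fixed earliest=-30d@d
    | dedup dns_name plugin.id
    | timechart span=1w count by plugin.id limit=0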
Normally we save Splunk dashboards with the .xml file extension and promote changes via Git. I want to know: for Splunk Dashboard Studio, what is the file extension, and once a dashboard has been created via the UI, how can I promote it?
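From what I've seen — worth verifying on your version — Dashboard Studio dashboards are still saved as .xml views under an app's data/ui/views directory, so the same Git promotion workflow applies. The difference is that the file is a version="2" wrapper whose <definition> element carries the dashboard as JSON inside a CDATA block, roughly like this (contents elided):

    <dashboard version="2" theme="light">
      <label>My Studio Dashboard</label>
      <definition><![CDATA[
        {
          "dataSources": { ... },
          "visualizations": { ... },
          "layout": { ... }
        }
      ]]></definition>
    </dashboard>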
    Paranumber    Name
    95929         Magnolia Jones Sr.
    35716         Leslie Streich
    99265         Magnolia Jones Sr.
    152743        Kacey Cartwright
    99265         Terence Deckow
    95929         Magnolia Jones Sr.
    131568        Dr. Ubaldo O'Kon
    95929         Miss Maegan Adams
    95929         Magnolia Jones Sr.
    110231        Charley Casper

How can I remove duplicates only where the two columns match? For example, "95929  Magnolia Jones Sr." appears three times. I want to remove duplicates of the entire row — not by a single column, but by both columns together.
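A hedged sketch: dedup with both fields listed treats the pair as the key, so a row is dropped only when both Paranumber and Name repeat; rows sharing just one of the two values are kept.

    ... | dedup Paranumber Name

An equivalent with stats, if you also want to see how many times each pair occurred: ... | stats count by Paranumber Name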
Splunk_TA_stream and a UF are installed on a server running MySQL 8. The Splunk App for Stream recognizes traffic with the protocol stack ip:tcp:mysql, but it cannot recognize traffic with the protocol stack ip:tcp:ssl:mysql. Is there any workaround to see the SELECT queries in captured traffic with the protocol stack ip:tcp:ssl:mysql?
Hi Community,

I have a use case where the client needs data to be stored over an extended period of time. The main objective is to test whether I can use a combination of SSD and HDD for this, and to make sure that the dashboards powered by data models still work once the data is on HDD over a Network File System. Since the client wants data to be available for at least six months, I created an index whose hot/warm buckets are on SSD and whose cold buckets are on slower storage mounted over NFS. For this test the new index has a bucket size of 500 MB and two hot buckets, with no warm buckets; I made no other changes to the default configuration. While the data is in the hot/warm buckets, the data models work and the panels load. But once the data moves to the HDD, the panels stop working and I get the error attached above. When I run a plain search query I can still see the data. Could you please let me know how I can fix this issue?

Regards,
Pravin
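For reference, a minimal indexes.conf sketch of the tiering described above; the index name, paths, and sizes are placeholders, not taken from the original post:

    # indexes.conf -- illustrative only
    [longterm_idx]
    homePath   = /ssd/splunk/longterm_idx/db            # hot/warm buckets on SSD
    coldPath   = /mnt/nfs/splunk/longterm_idx/colddb    # cold buckets on NFS-mounted HDD
    thawedPath = /mnt/nfs/splunk/longterm_idx/thaweddb
    maxDataSize = 500                                   # bucket size in MB
    maxHotBuckets = 2
    frozenTimePeriodInSecs = 15811200                   # ~6 months retention

One thing to check under this kind of setup (a hedged guess, since the error text isn't shown): if plain searches work but accelerated data model panels fail exactly when buckets roll to cold, the NFS mount's permissions, file locking, or latency on the cold path is a likely suspect.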
I am trying to compare a static column (Baseline) with multiple host columns, and where there is a difference I need to highlight that cell in red.

    Component   BASELINE     HOSTA        HOSTB        HOSTC
    GPU         20           20           5            7
    GPU1        5            7            7            5
    FW          2.4.2        2.4.2        2.4.2        2.4.3
    IP          1.1.1.1      1.1.1.2      1.1.1.1      1.1.1.1
    ID          [234, 336]   [234, 336]   [134, 336]   [234, 336]

    <form theme="dark">
      <label>Preos Firmware Summary - Liquid Cooled</label>
      <fieldset submitButton="false">
        <input type="multiselect" token="tok_host" searchWhenChanged="true">
          <label>Host</label>
          <valueSuffix>,</valueSuffix>
          <fieldForLabel>host</fieldForLabel>
          <fieldForValue>host</fieldForValue>
          <search>
            <query>index=preos_inventory sourcetype=preos_inventory Type=Liquid_Cooled | stats count by host | dedup host</query>
            <earliest>-90d@d</earliest>
            <latest>now</latest>
          </search>
          <default>*</default>
          <delimiter> </delimiter>
          <choice value="*">All</choice>
        </input>
        <input type="multiselect" token="tok_component" searchWhenChanged="true">
          <label>Component</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>Component</fieldForLabel>
          <fieldForValue>Component</fieldForValue>
          <search>
            <query>index=preos_inventory sourcetype=preos_inventory Type=Liquid_Cooled host IN ($tok_host$) "IB HCA FW" OR *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" OR *NVSW*
    | rex field=_raw "log-inventory.sh\[(?&lt;id&gt;[^\]]+)\]\:\s*(?&lt;Component&gt;[^\:]+)\:\s*(?&lt;Hardware_Details&gt;.*)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*CPLD\:\s*(?&lt;Hardware&gt;[^.*]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*BMC\:\s*version\:\s*(?&lt;Hardware1&gt;[^\,]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*SBIOS\s*version\:\s*(?&lt;Hardware2&gt;[^ ]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*nvme\d*\:.*FW\:\s*(?&lt;Hardware3&gt;[^ ]+)"
    | rex field=_raw "VBIOS\:\s*(?&lt;Hardware4&gt;[^\,]+)"
    | rex field=_raw "NVSW(\d\s|\s)FW\:\s*(?&lt;Hardware5&gt;(.*))"
    | rex field=_raw "IB\s*HCA\sFW\:\s*(?&lt;Hardware6&gt;(.*))"
    | eval output = mvappend(Hardware,Hardware1,Hardware2,Hardware3,Hardware4,Hardware5,Hardware6)
    | replace BMC WITH "BMC and AUX" in Component
    | search Component IN("*")
    | stats latest(output) as output latest(_time) as _time by Component host
    | fields - _time
    | eval from="search"
    | join Component
        [| inputlookup FW_Tracking_Baseline.csv
         | search Component!=*ERoT* Component!=PCIeRetimer* Component!="BMC FW ver"
         | table Component Baseline
         | eval from="lookup"
         | rename Baseline as lookup_output
         | fields lookup_output Component output]
    | stats count(eval(lookup_output==output)) AS case BY host Component output lookup_output
    | replace 1 WITH "match" IN case
    | replace 0 WITH "No match" IN case
    | stats values(Component) as Component by host lookup_output case output
    | stats count by Component
    | dedup Component</query>
            <earliest>-90d@d</earliest>
            <latest>now</latest>
          </search>
          <valueSuffix>"</valueSuffix>
          <delimiter> ,</delimiter>
          <valuePrefix>"</valuePrefix>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <query>index=preos_inventory sourcetype=preos_inventory Type=Liquid_Cooled host IN ($tok_host$) "IB HCA FW" OR *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" OR *NVSW*
    | rex field=_raw "log-inventory.sh\[(?&lt;id&gt;[^\]]+)\]\:\s*(?&lt;Component&gt;[^\:]+)\:\s*(?&lt;Hardware_Details&gt;.*)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*CPLD\:\s*(?&lt;Hardware&gt;[^.*]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*BMC\:\s*version\:\s*(?&lt;Hardware1&gt;[^\,]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*SBIOS\s*version\:\s*(?&lt;Hardware2&gt;[^ ]+)"
    | rex field=_raw "log-inventory.sh\[\d*\]\:\s*nvme\d*\:.*FW\:\s*(?&lt;Hardware3&gt;[^ ]+)"
    | rex field=_raw "VBIOS\:\s*(?&lt;Hardware4&gt;[^\,]+)"
    | rex field=_raw "NVSW\d\s*FW\:\s*(?&lt;Hardware5&gt;(.*))"
    | rex field=_raw "IB\s*HCA\sFW\:\s*(?&lt;Hardware6&gt;(.*))"
    | eval output = mvappend(Hardware,Hardware1,Hardware2,Hardware3,Hardware4,Hardware5,Hardware6)
    | replace BMC WITH "BMC and AUX" in Component
    | stats latest(output) as output latest(_time) as _time by Component host
    | eval from="search"
    | fields - _time
    | chart values(output) by Component host limit=0
    | fillnull value="No Data"
    | join Component
        [| inputlookup FW_Tracking_Baseline.csv
         | search Component!=*ERoT* Component!=PCIeRetimer* Component!="BMC FW ver"
         | table Component Baseline
         | eval from="lookup"
         | fields Baseline Component output]
    | fields Component Baseline *
    | fillnull value="No Data"</query>
              <earliest>-90d@d</earliest>
              <latest>now</latest>
            </search>
            <option name="count">50</option>
            <option name="drilldown">none</option>
            <option name="refresh.display">progressbar</option>
          </table>
        </panel>
      </row>
    </form>

Thanks in advance
Hello Splunk Community,

I have a Python script that checks a certain family of Cisco devices and tells me whether each device is UP or DOWN. The script is driven by a CSV file with hostname and IP; the file is not really subject to change, but can be changed easily if required. I wish I could use the Splunk SNMP module, but I would need some sort of API key (BaboonBones?!). I can run the script outside of Splunk to create a "log" file and then have Splunk read the file. Maybe that is the best way, but I am wondering whether it is worthwhile to find the splunklib.client Python module and use it to send the data directly. I am open to suggestions.

Thanks, eholz1
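A hedged sketch of the splunklib.client route (pip install splunk-sdk); the host, credentials, index, and sourcetype below are placeholders for illustration:

    # Send the script's UP/DOWN results straight to an index via the REST API.
    import splunklib.client as client

    service = client.connect(
        host="splunk.example.com",   # placeholder: your Splunk server
        port=8089,
        username="admin",
        password="changeme",
    )

    index = service.indexes["network_devices"]   # placeholder: index must exist
    index.submit(
        "device=rtr01 ip=10.0.0.1 status=UP",    # one event per device check
        sourcetype="cisco:ping",
        host="poller01",
    )

That said, writing a log file and letting a UF monitor it (your current plan) is the more resilient design: the forwarder handles buffering, retries, and restarts for free, while the REST route ties delivery to the script staying up.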
What is the difference between the rules engine and aggregation policies in ITSI?
Do you have to switch between products, or can you stick with ITSI the whole way?
Which product(s) would you use to detect, triage, and act on privilege escalation? And how would you then proceed?
Which product(s) would you use to detect, triage, and act on phishing?
We are creating a custom action for when an ITSI event happens, based on CustomGroupActionBase as documented here. However, I can't find anywhere what data is expected to be returned when calling the get_group method. The docs say:

    get_group()
    Gets the episode that triggered the custom action. This method relies on get_results_file() and expects the returned file path to be a .csv.gz format.

The documentation of get_results_file says:

    get_results_file()
    Gets the results file, which is where results are temporarily stored.

We want to make sure the fields we currently see in the dict returned by get_group don't change — even better if we understand which file the data comes from and where. We are afraid we might use fields that are not always populated, which would result in an error in our code.
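A hedged sketch for investigating this yourself: since get_group() is documented as being built from the .csv.gz results file, you can read that file directly and dump the column names, then compare across episodes to see which fields your environment reliably fills:

    # Given the path returned by get_results_file(), list the fields each
    # episode row actually carries before depending on them in code.
    import csv
    import gzip

    def dump_episode_fields(results_path):
        with gzip.open(results_path, "rt", newline="") as fh:
            reader = csv.DictReader(fh)
            for row in reader:
                print(sorted(row.keys()))   # every column in the results file
                break                       # the header is identical for all rows

Defensive access (row.get("field") with a fallback rather than row["field"]) is probably the safer pattern regardless, given the docs make no guarantee about which fields are always populated.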
Hi, I have an issue configuring the Alert Manager app: the Incident Posture filter is not working. No matter what I change inside the red box, the alerts shown below it do not change at all. Any help would be greatly appreciated.
Issue: the Phantom Add-on for Splunk is not saving any changes made to saved searches, and the error below is observed in the internal logs.

Error observed in internal logs:

    2022-11-17 17:19:19,970 +0000 ERROR phantom_splunk:188 - Traceback (most recent call last):
      File "/opt/splunk/etc/apps/phantom/bin/phantom_splunk.py", line 182, in rest
        response, content = splunk.rest.simpleRequest(path, **args)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 648, in simpleRequest
        raise splunk.AuthorizationFailed(extendedMessages=uri)
    splunk.AuthorizationFailed: [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/nobody/phantom/configs/conf-phantom?count=-1&output_mode=json

Observations:

- The Splunk Prod to Phantom integrations are intact, and I successfully pushed a notable to Prod during troubleshooting.
- Splunk Cloud was recently updated to 9.0.
- Splunk Enterprise 9.0 is compatible with the currently installed Phantom app version 4.1.73.
- I tested with the highest Splunk permissions and am still unable to save or edit a forwarding search.
Hi, good day to you! I want to understand whether Splunk notables appear on the incident dashboard with a delayed timestamp when they are moved from the Dev stage to Prod. I often see bulk notables triggering with a time lag (say today is the 18th, yet the alerts show the 17th or earlier) whenever the SOC team pushes a new use case to the production queue (status: new). Happy to get some context/knowledge around this. Cheers,