Hi, I am trying to reset a text box when a button is pressed. However, since it is an <html> button, I just don't know where to start. Any ideas? Thanks in advance.

<panel depends="$host_token$,$functional_validation_token$" id="comment" rejects="$showExpandLink5$">
  <title>Update Comments for $script_token$</title>
  <input type="text" token="comment_token" searchWhenChanged="true">
    <label>Comment</label>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <html>
    <style>.btn-primary { margin: 5px 10px 5px 0; }</style>
    <a href="http://$mte_machine$:4444/executeScript/envMonitoring@@qcst_processingScriptsChecks.sh/-updateComment/$runid_token$/$script_token$/$npid_token$/%22$comment_token$%22" target="_blank" class="btn btn-primary" style="height:25px;width:250px;">Submit</a>
  </html>
</panel>

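A minimal sketch of one possible approach, assuming a dashboard JS extension is acceptable: an <html> element cannot change tokens on its own, but a small script referenced from the dashboard root tag (e.g. <form script="reset_comment.js">, file name hypothetical) can watch a hypothetical reset button (say, <a id="resetComment" class="btn">Reset</a> added inside the <html> block above) and push comment_token back to its initial value:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    // "default" holds the dashboard's token model; setting the
    // form.-prefixed token updates both the token and the visible input
    var tokens = mvc.Components.get('default');

    $(document).on('click', '#resetComment', function () {
        tokens.set('form.comment_token', '*');
    });
});
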
Hi All, when I change some configs on a HF, it seems that I need to restart the HF according to the doc below: https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/Configurationfilechangesthatrequirerestart — "If you make a configuration file change to a heavy forwarder, you must restart the forwarder, but you do not need to restart the receiving indexer." Is that true? How can I reload a changed config without a restart? If it is impossible, data being ingested via HEC would be lost during the restart. What is the workaround?

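A hedged sketch of one option, with the caveat that it is not a blanket replacement for a restart: splunkd exposes _reload on many of its REST configuration endpoints, so a change to a reloadable setting can sometimes be picked up over the management port without bouncing the process. Anything the documentation above lists as restart-required still needs the restart. The endpoint below is an example for HEC (HTTP) inputs; substitute the endpoint matching what you changed, and verify that your version supports _reload on it:

# Ask splunkd to re-read a reloadable configuration endpoint (example: HEC inputs)
curl -k -u admin:changeme https://localhost:8089/services/data/inputs/http/_reload

On the data-loss concern, well-behaved HEC clients retry on connection failure, so a brief restart usually means delayed rather than lost events.
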
Hi, I currently have this search that gets the earliest and latest timestamp of each index. But since I am running this search over the All time range, it is very slow.

| tstats earliest(_time) as earliestTime latest(_time) as latestTime where index=* by index
| eval strfearliestTime=strftime(earliestTime,"%Y/%m/%d %H:%M:%S")
| eval strflatestTime=strftime(latestTime,"%Y/%m/%d %H:%M:%S")

Do you have any other options for getting this information? I also tried using the | rest command, but I am not getting the minTime and maxTime I saw in queries that others are using.

| rest /services/data/indexes
| eval indexSize=tostring(round(currentDBSizeMB/1024,2), "commas"), events=tostring(totalEventCount, "commas"), daysRetention=frozenTimePeriodInSecs/60/60/24
| foreach *Time [| eval <<FIELD>>=strptime(<<FIELD>>,"%Y-%m-%dT%H:%M:%S%Z"), <<FIELD>>=strftime(<<FIELD>>,"%m/%d/%Y %H:%M:%S") ]
| fillnull value="n/a"
| table title, splunk_server, indexSize, daysRetention, events, maxTime, minTime
| rename title as "Index Name", splunk_server as "Splunk Server", indexSize as "Current Size on Disk (GB)", daysRetention as "Retention Period in Days", events as "Count of events", maxTime as "Most Recent Event", minTime as "Earliest Event"

Can you please suggest other options? Thank you!

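A minimal sketch of one faster alternative, using dbinspect, which reads bucket metadata rather than scanning events and therefore stays fast even over All Time. Bucket start/end times are bounds on the data in each bucket, so this is an approximation at bucket granularity (and if index=* wildcarding misbehaves on your version, list the indexes explicitly):

| dbinspect index=*
| stats min(startEpoch) as earliestTime max(endEpoch) as latestTime by index
| eval strfearliestTime=strftime(earliestTime,"%Y/%m/%d %H:%M:%S")
| eval strflatestTime=strftime(latestTime,"%Y/%m/%d %H:%M:%S")
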
After restarting the splunk master node machine (the whole machine; there was no update of the splunk software itself, just underlying OS updates), the splunkd process reported this on start:

11-04-2021 11:08:22.831 +0100 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/bin/import_icons_SplunkEnterpriseSecuritySuite.py" /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/lib/SplunkEnterpriseSecuritySuite_app_common/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.2) or chardet (4.0.0) doesn't match a supported version!
11-04-2021 11:08:22.831 +0100 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/bin/import_icons_SplunkEnterpriseSecuritySuite.py" RequestsDependencyWarning)

What's interesting is that both python packages which are supposedly unsupported come with the ES app itself (they are located in /opt/splunk/etc/apps/SplunkEnterpriseSecurity/lib/SplunkEnterpriseSecuritySuite_app_common/packages). I'm not sure if it's the first ever occurrence of this error, because the only splunk components restarted during the _internal retention period were HFs, which obviously don't host ES. I will be restarting my SHs later in the day, so I expect the same errors there as well. Has anyone encountered it? Should I simply file a support ticket? (Yes, we're a bit behind since it's a 6.4.1, but I don't see it listed as any known or later-fixed issue.)

Hello, I need to use two different Machine Agent versions in order to run AppDynamics extensions. Instead of installing each agent on a separate server, I'm thinking of running each machine agent in a separate container. Is there a way to install the Standalone Machine Agent in a container? Thanks.

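A minimal sketch, assuming the officially published appdynamics/machine-agent image on Docker Hub and its controller environment variables (verify the exact variable names against your agent version's documentation); two containers with different image tags would give you both agent versions side by side:

# First agent version (tag, hostnames, and credentials are placeholders)
docker run -d --name machine-agent-a \
  -e APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com \
  -e APPDYNAMICS_CONTROLLER_PORT=443 \
  -e APPDYNAMICS_CONTROLLER_SSL_ENABLED=true \
  -e APPDYNAMICS_AGENT_ACCOUNT_NAME=myaccount \
  -e APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=secret \
  appdynamics/machine-agent:<version-a>

# Second agent version: same pattern with a different tag and container name
docker run -d --name machine-agent-b ... appdynamics/machine-agent:<version-b>

Extensions can then be mounted into each container's monitors directory with -v, so each container carries only the extension that needs its agent version.
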
Hi, we use Stream app version 8.0.1 and we want to extract the x-forwarded-for IP address, but our system uses a "Client-IP" header instead of the "X-Forwarded-For" header. Splunk Stream extracts the X-Forwarded-For header by default. How can we extract the Client-IP header information? Thank you for helping. Best Regards, Mesut

Hi Fellow Splunk Devs, given the recent news about jQuery 3.5 being added to the Splunk 8.2 release package, I have a question about the lower versions of Splunk (8.1 and lower): would it be possible to use jQuery 3.5 on my dashboards? If yes, what are the possible ways to do it? Thanks in advance for your responses.

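A minimal sketch of one possible approach, assuming you are comfortable shipping your own copy of jQuery under the app's appserver/static directory (the app name and file path below are hypothetical). Loading a second jQuery and immediately releasing the global $ with noConflict keeps Splunk's bundled version intact for the rest of the page, while your dashboard code uses the 3.5 copy under its own name:

require(['splunkjs/mvc/simplexml/ready!'], function () {
    // Load a private copy of jQuery 3.5 shipped with the app
    var script = document.createElement('script');
    script.src = '/static/app/my_app/jquery-3.5.1.min.js'; // hypothetical app + file
    script.onload = function () {
        // Hand the global $ back to Splunk's bundled jQuery;
        // keep 3.5 under our own variable for dashboard code
        var jq35 = window.jQuery.noConflict(true);
        console.log('Loaded jQuery ' + jq35.fn.jquery + ' for dashboard use');
    };
    document.head.appendChild(script);
});
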
We have used CentOS on some of our splunk servers, and now that it reaches End of Life on December 31, 2021, we are looking to rebuild the servers with a new OS. The new standard from our linux team is Rocky. Since Rocky is a relatively new distro, we do not have any experience running splunk on this OS. Is there anyone out there who has that experience and can share?

Hi, I have events with more than 20 lines of data. In the Field extraction menu only the first 20 lines are shown, which prevents me from extracting fields beyond the 20th line. Is there a way to show more lines? Can I get the required fields in another way? My fields all have the same format, like: $_NAME: VALUE. There are about 1200 different values in one event. Can I auto-extract all fields from my events? (They all have the same sourcetype.)

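A minimal sketch of an alternative that skips the Field Extractor UI entirely, assuming every pair really follows the $_NAME: VALUE pattern: a props/transforms pair using the documented FORMAT = $1::$2 form creates one field per regex match, so all ~1200 pairs come out without naming them individually. The stanza and transform names are hypothetical, and the regex assumes names are word characters after a literal $_ prefix:

props.conf:
[your_sourcetype]
REPORT-name_value_pairs = extract_name_value_pairs

transforms.conf:
[extract_name_value_pairs]
# Each match becomes a field: capture 1 is the field name, capture 2 its value
REGEX = \$_(\w+):\s*([^\r\n]+)
FORMAT = $1::$2
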
In WildFly application servers there is the /metrics REST endpoint. What is the best way to get the data provided by WildFly /metrics into splunk? What we have found is the "REST API Modular Input" app (https://splunkbase.splunk.com/app/1546/), but this costs $99 per connection and we have 200+ different WildFly servers. Comparing that to Prometheus + Grafana, which comes for free and consumes such an API out of the box, it would be hard to justify this solution. But as we already have a splunk environment we would like to keep it, so there must be a better solution that costs less than the "REST API Modular Input" app. We have a Splunk forwarder on all WildFly servers, so it should be possible to grab the data somehow and push it to splunk. We have also seen the "Splunk Add-on for Java Management Extensions" add-on, but this seems like re-inventing the wheel, as the data necessary for monitoring is already provided by the /metrics endpoint. And opening a production server for remote JMX access seems odd: JMX can do anything to that server, not just performance monitoring, which feels like a severe security risk, and JMX security and beans change from release to release. Who can help?

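A minimal sketch of one license-free approach, assuming the forwarders already on each WildFly host may run scripted inputs and can reach the local management interface: a tiny scripted input curls /metrics on an interval, and whatever it prints to stdout gets indexed. The app name, index, sourcetype, and management port (9990 is the WildFly default) are assumptions to adapt:

inputs.conf:
[script://$SPLUNK_HOME/etc/apps/wildfly_metrics/bin/poll_metrics.sh]
interval = 60
sourcetype = wildfly:metrics
index = metrics_raw

bin/poll_metrics.sh:
#!/bin/sh
# Pull the Prometheus-format metrics from the local WildFly management interface
curl -s http://localhost:9990/metrics

Since the script runs locally on each host, no remote JMX exposure is needed, and the same app can be pushed to all 200+ forwarders from the deployment server.
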
Hello Splunk Community, I have created a dashboard with 3 dropdowns: Select System, Select Environment, Select Period (time). Note: each system has named their environments the same, i.e. Production, UAT etc. I seem to be having a problem when I have already selected all dropdowns and the metrics load, then I change the System dropdown: the Environment dropdown seems to update, but there is 1 duplicate (i.e. in the previous search I selected the 'Production' environment and now I have 2 Production environments, presumably one for each system). Can someone assist me in figuring out how to clear the Environment dropdown when I change the System? I have tried to play around with the settings within the UI but no luck. Is there something I need to change in my source code?

<fieldset submitButton="false" autoRun="false">
  <input type="dropdown" token="CMDB_CI_Name" searchWhenChanged="true">
    <label>Select IT Services</label>
    <fieldForLabel>CMDB_CI_Name</fieldForLabel>
    <fieldForValue>CMDB_CI_Name</fieldForValue>
    <search>
      <query>|inputlookup list.csv | fields CMDB_CI_Name | dedup CMDB_CI_Name</query>
      <earliest>-4h@m</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="dropdown" token="env" searchWhenChanged="true">
    <label>Select Environment</label>
    <change>
      <set token="tokEnvironment">$label$</set>
    </change>
    <fieldForLabel>Env_Purpose</fieldForLabel>
    <fieldForValue>Env_Infra</fieldForValue>
    <search>
      <query>|inputlookup list.csv | search CMDB_CI_Name=$CMDB_CI_Name$ | fields Env_Purpose, Env_Infra | dedup Env_Purpose, Env_Infra</query>
    </search>
  </input>
  <input type="time" token="time_token" searchWhenChanged="true">
    <label>Time Period</label>
    <default>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </default>

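A minimal sketch of the usual fix: give the System dropdown a <change> block that unsets the Environment tokens, so a stale selection cannot survive a system switch. Token names match the XML above; the form.-prefixed unset is what clears the visible dropdown widget:

<input type="dropdown" token="CMDB_CI_Name" searchWhenChanged="true">
  <label>Select IT Services</label>
  <change>
    <!-- Drop the previously selected environment when the system changes -->
    <unset token="form.env"></unset>
    <unset token="env"></unset>
    <unset token="tokEnvironment"></unset>
  </change>
  <fieldForLabel>CMDB_CI_Name</fieldForLabel>
  <fieldForValue>CMDB_CI_Name</fieldForValue>
  <!-- search element unchanged from the original input above -->
</input>
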
Hi, I am trying to construct a report showing when the response time is over a threshold percentage, and how many minutes it has been over within a time range. I can build the threshold part, but I am stuck on calculating how many minutes it has been over the percentage in a time frame. Any help would be greatly appreciated. Thanks, Joe

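A minimal sketch of one way to count the minutes, assuming a response_time field and an illustrative threshold of 90 (swap in your own field and value): bucket events into one-minute spans, flag each minute that breaches the threshold, then sum the flags.

... your base search ...
| bin _time span=1m
| stats avg(response_time) as avg_rt by _time
| eval over_threshold=if(avg_rt > 90, 1, 0)
| stats sum(over_threshold) as minutes_over
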
Hi, 1) Which integration method is used when data is onboarded in each of the following ways: a) HEC method, b) TCP method, c) DB Connect? 2) How many API scripts can we run on a HF? If possible, can you please suggest any documentation, and also the use cases for each of the above methods individually?

Hi All (and @dmarling and @efavreau), I have been using the Paychex Cover Your Assets techniques from the 2019 Splunk Conference to export user config and load it into Splunk Cloud. I have used it for a few sites, but with the latest site I have a problem where alerts defined with Time Range set to Custom have loaded into cloud with Time Range set to "All Time". This will obviously cause a performance problem, especially as many alerts run frequently and usually the Time Range is set to 5 minutes. Has anyone else noticed these settings being lost in the Paychex process? For example, an alert's Custom time range has become "All Time". I have checked and can see that the first Paychex SPL worked fine, as I can find these fields in the resulting csv. But the second Paychex SPL that assembles the CreateCurl has dropped these fields:

curl -k -H "Authorization: Splunk XXXXXXXXXXXXXXXX/servicesNS/nobody/search/saved/searches -d name="AWS ASG ELB Activity" -d search="%28index%3Daws%20OR%20index%3Dclick%29%20sourcetype%3D%22aws%3Acloudtrail%22%20%20userAgent%3D%22autoscaling%2Eamazonaws%2Ecom%22%20accountName%3DProduction%20%20%28eventName%3D%20%20%22DeregisterInstancesFromLoadBalancer%22%20OR%20%20eventName%3D%20%22RegisterInstancesWithLoadBalancer%22%29%7C%20spath%20path%3DrequestParameters%2Einstances%7B%7D%2EinstanceId%20output%3Dinstances%20%20%20%7C%20eval%20slack%5Fmessage%20%3D%20strftime%28%5Ftime%2C%20%22%20%25Y%2D%25m%2D%25d%20%25H%3A%25M%3A%25S%22%29%20%2E%20%22%20autoscaling%20%22%7Ceval%20slack%5Fmessage%20%3D%20slack%5Fmessage%20%2E%20if%28eventName%3D%22RegisterInstancesWithLoadBalancer%22%2C%20%22%20added%20%22%2C%20%22%20removed%20%22%29%20%7Ceval%20instance%5Ftotal%3Dmvcount%28%09%0A%27responseElements%2Einstances%7B%7D%2EinstanceId%27%29%7Ceval%20instance%5Fcount%3Dmvcount%28instances%29%20%7C%20eval%20instance%5Flist%3Dmvjoin%28instances%2C%22%3B%22%29%20%20%7C%20eval%20slack%5Fmessage%20%3D%20slack%5Fmessage%20%2E%20instance%5Fcount%20%2E%20if%28instance%5Fcount%3D1%2C%20%22%20instance%22%2C%20%22%20instances%22%29%20%2E%20if%28eventName%3D%22RegisterInstancesWithLoadBalancer%22%2C%20%22%20to%22%2C%20%22%20from%22%29%20%2E%20%22%20load%20balancer%20%22%20%2E%20%27requestParameters%2EloadBalancerName%27%20%2E%20%22%2C%20new%20instance%20count%20is%20%22%20%2E%20instance%5Ftotal%20%2E%20%22%20%28%22%20%2E%20instance%5Flist%20%2E%22%29%22%20%7C%20table%20%20slack%5Fmessage%20%7Csort%20%2Dslack%5Fmessage" -d description="" -d auto_summarize.cron_schedule="%2A%2F10%20%2A%20%2A%20%2A%20%2A" -d cron_schedule="%2A%2F5%20%2A%20%2A%20%2A%20%2A" -d is_scheduled="1" -d schedule_window="0" -d action.email="0" -d action.email.sendresults="" -d action.email.to="" -d action.keyindicator.invert="0" -d action.makestreams.param.verbose="0" -d action.notable.param.verbose="0" -d action.populate_lookup="0" -d action.risk.param.verbose="0" -d action.rss="0" -d action.script="0" -d action.slack="1" -d action.slack.param.channel="%23digital%2Dprod%2Daudit" -d action.slack.param.message="%24result%2Eslack%5Fmessage%24" -d action.summary_index="0" -d action.summary_index.force_realtime_schedule="0" -d actions="slack" -d alert.digest_mode="0" -d alert.expires="24h" -d alert.managedBy="" -d alert.severity="3" -d alert.suppress="0" -d alert.suppress.fields="" -d alert.suppress.group_name="" -d alert.suppress.period="" -d alert.track="0" -d alert_comparator="greater%20than" -d alert_condition="" -d alert_threshold="0" -d alert_type="number%20of%20events" -d display.events.fields="%5B%22host%22%2C%22source%22%2C%22sourcetype%22%5D" 
-d display.events.list.drilldown="full" -d display.events.list.wrap="1" -d display.events.maxLines="5" -d display.events.raw.drilldown="full" -d display.events.rowNumbers="0" -d display.events.table.drilldown="1" -d display.events.table.wrap="1" -d display.events.type="list" -d display.general.enablePreview="1" -d display.general.migratedFromViewState="0" -d display.general.timeRangePicker.show="1" -d display.general.type="statistics" -d display.page.search.mode="verbose" -d display.page.search.patterns.sensitivity="0%2E8" -d display.page.search.showFields="1" -d display.page.search.tab="statistics" -d display.page.search.timeline.format="compact" -d display.page.search.timeline.scale="linear" -d display.statistics.drilldown="cell" -d display.statistics.overlay="none" -d display.statistics.percentagesRow="0" -d display.statistics.rowNumbers="0" -d display.statistics.show="1" -d display.statistics.totalsRow="0" -d display.statistics.wrap="1" -d display.visualizations.chartHeight="300" -d display.visualizations.charting.axisLabelsX.majorLabelStyle.overflowMode="ellipsisNone" -d display.visualizations.charting.axisLabelsX.majorLabelStyle.rotation="0" -d display.visualizations.charting.axisLabelsX.majorUnit="" -d display.visualizations.charting.axisLabelsY.majorUnit="" -d display.visualizations.charting.axisLabelsY2.majorUnit="" -d display.visualizations.charting.axisTitleX.text="" -d display.visualizations.charting.axisTitleX.visibility="visible" -d display.visualizations.charting.axisTitleY.text="" -d display.visualizations.charting.axisTitleY.visibility="visible" -d display.visualizations.charting.axisTitleY2.text="" -d display.visualizations.charting.axisTitleY2.visibility="visible" -d display.visualizations.charting.axisX.abbreviation="none" -d display.visualizations.charting.axisX.maximumNumber="" -d display.visualizations.charting.axisX.minimumNumber="" -d display.visualizations.charting.axisX.scale="linear" -d display.visualizations.charting.axisY.abbreviation="none" -d display.visualizations.charting.axisY.maximumNumber="" -d display.visualizations.charting.axisY.minimumNumber="" -d display.visualizations.charting.axisY.scale="linear" -d display.visualizations.charting.axisY2.abbreviation="none" -d display.visualizations.charting.axisY2.enabled="0" -d display.visualizations.charting.axisY2.maximumNumber="" -d display.visualizations.charting.axisY2.minimumNumber="" -d display.visualizations.charting.axisY2.scale="inherit" -d display.visualizations.charting.chart="column" -d display.visualizations.charting.chart.bubbleMaximumSize="50" -d display.visualizations.charting.chart.bubbleMinimumSize="10" -d display.visualizations.charting.chart.bubbleSizeBy="area" -d display.visualizations.charting.chart.nullValueMode="gaps" -d display.visualizations.charting.chart.overlayFields="" -d display.visualizations.charting.chart.rangeValues="" -d display.visualizations.charting.chart.showDataLabels="none" -d display.visualizations.charting.chart.sliceCollapsingThreshold="0%2E01" -d display.visualizations.charting.chart.stackMode="default" -d display.visualizations.charting.chart.style="shiny" -d display.visualizations.charting.drilldown="all" -d display.visualizations.charting.fieldColors="" -d display.visualizations.charting.fieldDashStyles="" -d display.visualizations.charting.gaugeColors="" -d display.visualizations.charting.layout.splitSeries="0" -d display.visualizations.charting.layout.splitSeries.allowIndependentYRanges="0" -d 
display.visualizations.charting.legend.labelStyle.overflowMode="ellipsisMiddle" -d display.visualizations.charting.legend.mode="standard" -d display.visualizations.charting.legend.placement="right" -d display.visualizations.charting.lineWidth="2" -d display.visualizations.custom.drilldown="all" -d display.visualizations.custom.height="" -d display.visualizations.custom.type="" -d display.visualizations.mapHeight="400" -d display.visualizations.mapping.choroplethLayer.colorBins="5" -d display.visualizations.mapping.choroplethLayer.colorMode="auto" -d display.visualizations.mapping.choroplethLayer.maximumColor="0xaf575a" -d display.visualizations.mapping.choroplethLayer.minimumColor="0x62b3b2" -d display.visualizations.mapping.choroplethLayer.neutralPoint="0" -d display.visualizations.mapping.choroplethLayer.shapeOpacity="0%2E75" -d display.visualizations.mapping.choroplethLayer.showBorder="1" -d display.visualizations.mapping.data.maxClusters="100" -d display.visualizations.mapping.drilldown="all" -d display.visualizations.mapping.legend.placement="bottomright" -d display.visualizations.mapping.map.center="%280%2C0%29" -d display.visualizations.mapping.map.panning="1" -d display.visualizations.mapping.map.scrollZoom="0" -d display.visualizations.mapping.map.zoom="2" -d display.visualizations.mapping.markerLayer.markerMaxSize="50" -d display.visualizations.mapping.markerLayer.markerMinSize="10" -d display.visualizations.mapping.markerLayer.markerOpacity="0%2E8" -d display.visualizations.mapping.showTiles="1" -d display.visualizations.mapping.tileLayer.maxZoom="7" -d display.visualizations.mapping.tileLayer.minZoom="0" -d display.visualizations.mapping.tileLayer.tileOpacity="1" -d display.visualizations.mapping.tileLayer.url="" -d display.visualizations.mapping.type="marker" -d display.visualizations.show="1" -d display.visualizations.singlevalue.afterLabel="" -d display.visualizations.singlevalue.beforeLabel="" -d display.visualizations.singlevalue.colorBy="value" -d display.visualizations.singlevalue.colorMode="none" -d display.visualizations.singlevalue.drilldown="none" -d display.visualizations.singlevalue.numberPrecision="0" -d display.visualizations.singlevalue.rangeColors="%5B%220x53a051%22%2C%20%220x0877a6%22%2C%20%220xf8be34%22%2C%20%220xf1813f%22%2C%20%220xdc4e41%22%5D" -d display.visualizations.singlevalue.rangeValues="%5B0%2C30%2C70%2C100%5D" -d display.visualizations.singlevalue.showSparkline="1" -d display.visualizations.singlevalue.showTrendIndicator="1" -d display.visualizations.singlevalue.trendColorInterpretation="standard" -d display.visualizations.singlevalue.trendDisplayMode="absolute" -d display.visualizations.singlevalue.trendInterval="" -d display.visualizations.singlevalue.underLabel="" -d display.visualizations.singlevalue.unit="" -d display.visualizations.singlevalue.unitPosition="after" -d display.visualizations.singlevalue.useColors="0" -d display.visualizations.singlevalue.useThousandSeparators="1" -d display.visualizations.singlevalueHeight="115" -d display.visualizations.trellis.enabled="0" -d display.visualizations.trellis.scales.shared="1" -d display.visualizations.trellis.size="medium" -d display.visualizations.trellis.splitBy="" -d display.visualizations.type="charting"   I really like this process and am keen to work out a solution but am asking in case someone else has already resolved it. Thanks heaps.
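A hedged observation on the likely cause: a saved search's Custom time range is carried in the dispatch.earliest_time and dispatch.latest_time properties, and neither appears among the -d parameters in the generated curl above, so the new saved search falls back to the All Time default. If those columns are present in the export CSV (as the first Paychex SPL suggests), one workaround is to make the CreateCurl assembly append them, e.g. (illustrative values, URL-encoded in the same style as the other parameters):

-d dispatch.earliest_time="%2D5m%40m" -d dispatch.latest_time="now"

I really like this process and am keen to work out a solution but am asking in case someone else has already resolved it. Thanks heaps.
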
Hello all, I have a saved search that I want to run once every Sunday at 00:00. I have added earliest=-7d@d latest=@m to the query to pick up the events for the last 7 days. I have also scheduled it to run every week on Sunday at 00:00, with the time range set to Last 7 Days. When I run the saved search manually it works as expected, and when I change the schedule to run every 5 minutes over the last 7 days it is able to index the data. However, when I schedule it to run once every week, the search runs but the data is not indexed to tier3. When I checked the job manager, the run completed successfully but no data was pushed to tier3. Can you please help with this?

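A minimal sketch of one thing to verify, assuming the summary indexing is configured on the saved search itself: the scheduled run uses the search's dispatch settings, and a UI time range of "Last 7 Days" can conflict with inline earliest/latest. Pinning everything in savedsearches.conf makes the weekly run unambiguous; the stanza name is a placeholder and tier3 is taken from the post:

[my_weekly_summary]
enableSched = 1
cron_schedule = 0 0 * * 0
dispatch.earliest_time = -7d@d
dispatch.latest_time = now
action.summary_index = 1
action.summary_index._name = tier3
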
We are running Splunk 8.x.x, and the splunkd process went down while a search was being executed. On investigation, we found the following:
* It is splunkd itself, not the search process, that crashed with a segmentation fault (SIGSEGV).
* Memory was not exhausted, and the messages file shows no trace of the oom-killer firing.
* No crash log was generated under the Splunk log directory.
* A long search string containing a very large number of commands appears to have been running.
Why would the splunkd process suddenly go down? Is there any way to deal with this?

Has anyone implemented the Splunk Federated Search feature? If yes, can someone please help us with the issue below. We have set up federated search across two Splunk Cloud instances and developed an alert on the SH of instance 1. Whenever the alert condition is met, the alert does not trigger, and in the job manager we see 0 events for that timeframe; but when we open the search and run it manually, we do see events. We are also seeing another issue when we try to write data from the other instance to a lookup file using a scheduled search: there is data loss while writing to the lookup file.

I have splunk queries that generate 2 different tables having similar fields (METHOD, URI, COUNT). I want to do a diff between them based on URI and also the count. E.g.:

tableA
METHOD  URI      COUNT
GET     1/0/foo  3
PUT     1/0/bar  11

tableB
METHOD  URI      COUNT
GET     1/0/foo  2
PUT     1/0/bar  11
PUT     1/0/buzz 1

Is there a way to take the difference between the 2 tables based on METHOD+URI and COUNT? The result should be something like:

METHOD  URI      COUNT
GET     1/0/foo  1
PUT     1/0/buzz 1

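A minimal sketch of one way to do this in SPL, assuming the two tables can be produced by searches that fit within subsearch limits (<searchA> and <searchB> are placeholders): tag each row with its source, sum the counts with opposite signs per METHOD+URI, and keep only the non-zero differences.

<searchA> | eval src="A"
| append [ search <searchB> | eval src="B" ]
| eval signed=if(src="A", COUNT, -COUNT)
| stats sum(signed) as diff by METHOD, URI
| where diff != 0
| eval COUNT=abs(diff)
| table METHOD, URI, COUNT

With the example above, 1/0/foo yields 3-2=1, 1/0/bar cancels to 0 and is dropped, and 1/0/buzz (present only in tableB) yields 1, matching the expected result.
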
Hello, I have csv source files without headers; sample events from such a file and the props conf I wrote are given below. Values in the first column can be used as timestamps. How should I write the props configuration for this csv source file? I am getting some timestamp error messages and some extra columns at the beginning of events. Any help will be highly appreciated. Thank you so much. Here is what I wrote:

[ csv ]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=csv
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%6Q
FIELD_NAMES=f1,f2,f3,f4,f5,f6,f7,f8

5 Sample Events:

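A hedged sketch of a corrected stanza, assuming the first column (f1) holds the timestamp: with INDEXED_EXTRACTIONS=csv the timestamp column is selected with TIMESTAMP_FIELDS rather than TIME_PREFIX, stanza headers should not carry surrounding spaces, and fractional seconds in strptime use %N (e.g. %6N for microseconds). Also note that INDEXED_EXTRACTIONS is applied where the file is first consumed, so this stanza belongs on the forwarder monitoring the file:

[csv]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = f1,f2,f3,f4,f5,f6,f7,f8
TIMESTAMP_FIELDS = f1
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
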
Hello, I'm working with a dashboard with a time picker whose token value is $time$. _time is currently set to the value of another field using:

| eval _time = _mytime

I have a timechart in the dashboard with the following search:

| timechart count limit=24 useother=f usenull=f

Results in:

2021-10-26 1
2021-10-27 417
2021-10-28 36
2021-10-29 15
2021-10-30 21
2021-10-31 3
2021-11-01 10
2021-11-02 3
2021-11-03 1

When I click on a bar in the time chart, for example the bar for 2021-10-27, I would like my time picker to change to that date and the dashboard to redraw with all the events for that day. I tried setting:

<drilldown>
  <set token="time_earliest">$earliest$</set>
  <set token="time_latest">$latest$</set>
</drilldown>

I have also tried:

<drilldown>
  <set token="form.time_earliest">$earliest$</set>
  <set token="form.time_latest">$latest$</set>
</drilldown>

Any suggestions?

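A minimal sketch of the usual pattern, assuming the dashboard's time input really is declared with token="time": a time picker is addressed through its .earliest/.latest sub-tokens, with the form. prefix so the picker widget itself updates, and $earliest$/$latest$ are the predefined drilldown tokens carrying the clicked bar's time bounds:

<drilldown>
  <set token="form.time.earliest">$earliest$</set>
  <set token="form.time.latest">$latest$</set>
</drilldown>

Any panel searches driven by the $time$ picker should then re-run automatically for the clicked day.
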