All Posts


@kiran_panchavat This explains and confirms the issue that we do have multiple events in the index, but it does not explain the steps to fix it. Let me know if I'm missing something.
Thank you for your response. May I please know what the solution would be?
Please share the search that is giving you these results.
OK, now I understand what you mean - you could try creating a dashboard and scheduling that as a PDF delivery - IIRC this has to be Classic, not Studio.
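For reference only, and assuming email is the delivery mechanism you'd use (the report name, search, schedule, and recipient below are placeholders), a scheduled report that attaches its results as a PDF looks roughly like this in savedsearches.conf; the classic dashboard itself can be scheduled from its edit menu via Schedule PDF Delivery:

[Weekly status PDF]
search = index=_internal sourcetype=splunkd | stats count by log_level
cron_schedule = 0 6 * * 1
dispatch.earliest_time = -7d@d
dispatch.latest_time = now
action.email = 1
action.email.to = someone@example.com
action.email.sendpdf = 1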
Hi, I'm using the functions PERC95 (p95) and PERC99 (p99) to retrieve request duration/response time for requests from a server farm (frontend servers). As far as I have understood, these functions should give you the MAX value of a subset of values: in a hypothetical scenario with 100 requests during 1 second, p95 should take the 95 requests with the lowest response times and, out of those 95 requests, pick the highest response time as the p95 value. If the response times of those 95 requests were in the range of 50ms to 300ms, the p95 value would then be 300ms. I've used searches with p95 and p99 and thought this was correct, but looking at the events I get out of both p95 and p99, the response time does not make any sense, as this "300ms" value cannot be found, and very often I cannot find any value close to this number at all. Could anyone enlighten me here in relation to the output I'm getting?

Example of search:

index=test host=server sourcetype=app_httpd_access AND "example"
| bin _time span=1s
| stats p99(A_1) as RT_p99_ms p95(A_1) as RT_p95_ms count by _time
| eval RT_p95_ms=round(RT_p95_ms/1000,2)
| eval RT_p99_ms=round(RT_p99_ms/1000,2)

p95 value output: 341,87ms
Total number of values returned during 1 second for p95: 15
Response time output in ms (I was expecting the value 341,87 at the TOP here, but it's not present):
343,69 330,675 329,291 301,369 279,018 246,719 106,387 103,216 100,232 44,794 44,496 42,491 38,974 38,336 34,201
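Not an authoritative answer, just a sketch for comparison: Splunk's percentile functions also have exact and upper variants, so running the same stats with perc95, exactperc95, and upperperc95 side by side (field and index names copied from the search above) can show whether approximation explains the value being reported:

index=test host=server sourcetype=app_httpd_access AND "example"
| bin _time span=1s
| stats perc95(A_1) as approx_p95 exactperc95(A_1) as exact_p95 upperperc95(A_1) as upper_p95 count by _time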
@architkhanna Hello, can you please go through this link: Solved: Why are there many duplicate events in the indexer... - Splunk Community
Use the path parameter/argument in spath to lock in a JSON array.

| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| fields - _* content.*

Note your sample data is non-compliant. Correcting for syntax, it should give

P_BATCH_ID | P_MESSAGE | P_MORE_BATCHES_EXISTS | P_PERIOD | P_REQUEST_ID | P_RETURN_STATUS | P_TEMPLATE | P_ZUORA_FILE_NAME
1 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 177 | SUCCESS | Template | Template20240306102852.csv
2 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 1r7 | SUCCESS | Template | Template20240306102852.csv
3 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 1577 | SUCCESS | Template | Template20240306102852.csv
4 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 16577 | SUCCESS | Template | Template20240306102852.csv

Here is an emulation of the compliant JSON.

| makeresults
| eval _raw = "{\"content\" : { \"List of Batches Processed\" : [ { \"P_REQUEST_ID\" : \"177\", \"P_BATCH_ID\" : \"1\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1r7\", \"P_BATCH_ID\" : \"2\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1577\", \"P_BATCH_ID\" : \"3\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"16577\", \"P_BATCH_ID\" : \"4\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }] } }"
``` data emulation above ```
Hi, thank you for your comment, and I got your point. Could you please provide me the process/steps for the question below? How do we consolidate the Thousand Eyes alerts going to Splunk so that we monitor only one dashboard in Splunk? Thanks
Also, I get no count value. I need the number of logins per user and the status of each login, e.g.:

Username   Status    Logins
xx@xx.xx   success   5
           failed    2
yy@yy.yy   success   2
           failed    4
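Assuming the login events already carry fields named Username and Status (hypothetical names; substitute your own, along with the index and sourcetype), a minimal sketch of that count would be:

index=auth sourcetype=login_events
| stats count as Logins by Username Status
| sort Username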
Hi, this is a community where volunteers give help and hints to others in their spare time. Please don't tag any names when you are asking for help! If you need some help, try to describe your issue as clearly as possible and give us examples, sample data, your SPL, etc. And remember, we are volunteers who love to help other Splunk users, but we are not here to do your job! r. Ismo
I'm running into a problem: I've placed an app on the SH and connected it to the UF, and the UF is monitoring the data path, but when I search I can't find the data. Below is my inputs.conf:

[monitor:///tutorialdata/www*/access.log]
index = web
host_segment=2
sourcetype = web:access
[monitor:///tutorialdata/www*/secure.log]
index = web
host_segment=2
sourcetype = web:secure

and props.conf:

EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
[web:secure]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
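As a sketch only (assuming the first pair of settings is meant to apply to the web:access sourcetype from inputs.conf), a props.conf with an explicit stanza header for each sourcetype would look like:

[web:access]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

[web:secure]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)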
This gets even more confusing.  What does monitoring something in one dashboard (as opposed to what?) have to do with "fixing (something) vulnerabilities" in the OP?  What does "consolidate" mean?  I begin to suspect that you are asking about some specialized Splunk app, not about Splunk security/Splunk vulnerability.
Hi Team, how do we consolidate the Thousand Eyes alerts going to Splunk so that we monitor only one dashboard in Splunk? Please provide me the process/steps. Thank you.
Hi All, I have a Splunk cluster environment where, while pulling data from a source, it gets indexed twice, not as a separate event, but within the same event. So all fields have the same value coming twice, making them multivalue fields. The same source works fine on a standalone Splunk server but fails on the cluster. I have tried to have props.conf present only in the data app on the indexer; however, with that, field extraction does not happen. If I keep props.conf in both the HF and the data app, field extraction happens, but with the above issue. Would appreciate it if anyone has any lead on this. TIA.
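One hedged illustration, not necessarily your fix: if index-time extractions already run on the HF and automatic search-time extraction runs again downstream, each field can end up with two copies of the same value. In that scenario the search-time side for the affected sourcetype (the stanza name below is a placeholder) is typically quieted in props.conf with something like:

[your:sourcetype]
KV_MODE = none
AUTO_KV_JSON = false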
Thanks in advance.
1. I have a JSON object as "content.List of Batches Processed{}", and Splunk already extracts the field "content.List of Batches Processed{}.BatchID", whose count shows as 26. But in "content.List of Batches Processed{}.BatchID" we have 134 records. So I want to extract the multiple JSON values as fields. From the logs below I want to extract all the values for P_REQUEST_ID, P_BATCH_ID, P_TEMPLATE.

Query I tried to fetch the data:

| eval BatchID=spath("content.List of Batches Processed{}*", "content.List of Batches Processed{}.P_BATCH_ID"), Request=spath(_raw, "content.List of Batches Processed{}.P_REQUEST_ID")
| table BatchID Request

Sample data:

"content" : { "List of Batches Processed" : [
{ "P_REQUEST_ID" : "177", "P_BATCH_ID" : "1", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
{ "P_REQUEST_ID" : "1r7", "P_BATCH_ID" : "2", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
{ "P_REQUEST_ID" : "1577", "P_BATCH_ID" : "3", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
{ "P_REQUEST_ID" : "16577", "P_BATCH_ID" : "4", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" }
Thanks for the response yuanliu, much appreciated, and sorry for the confusion. You're right that those fields should match up - it should look like the following:

- id: extract-audit-group
  type: regex_parser
  regex: '\"resourceGroup\"\:\"(?P<extracted_group>[^\"]+)\"'
- id: filter-group
  type: filter
  expr: 'attributes.extracted_group == "batch"'
- id: remove-extracted-group
  type: remove
  field: attributes.extracted_group

The id field can be named just about anything, so differences among names there don't matter. We've gone through quite a few iterations of testing, which is why there was a discrepancy there. What we have narrowed the problem down to in our testing is that either the camelCase is causing a regex issue with the field, or special characters within a value are causing an issue (or both; my hunch is that it is the camelCase, but we haven't had success with either). Putting these results into an RE2 regex parser gets the results we expect, but not with the actual deployed OTEL.
Yes, in a classic dashboard column chart, the fields will be stacked top-down based on their table order, left-to-right, so work_hours is stacked on top of slack_hours to give the effect of a vertical offset from 0. If we want to use a classic trellis layout to split by employee ID as shown below, we'll need to cheat by giving fields names that can be lexicographically sorted in our preferred order.

To generate eventNN fields from event data, we can count events with streamstats and generate a field name from the count. The exact numbering and ordering of the eventNN fields doesn't matter; the fields just need to be unique:

| streamstats count
| eval event{count}=value

Let's normalize and extend the sample data in your chart by employee_id and separate work schedules from events, where date is an epoch date and start_time and end_time are epoch dates and times in a schedule lookup named intidev_work_schedules.csv:

employee_id,date,start_time,end_time
123,1709510400,1709560800,1709596800
123,1709683200,1709683200,1709722800
123,1709769600,1709802000,1709841600
123,1709856000,1709888400,1709928000
123,1710028800,1710061200,1710100800
456,1709596800,1709625600,1709658000
456,1709683200,1709712000,1709744400
456,1709769600,1709798400,1709830800
456,1709856000,1709884800,1709917200
456,1709942400,1709971200,1710003600

and _time is an epoch date and time in event data with varying employee_id values:

| makeresults format=csv data="_time,employee_id,message
1709593200,123,Lorem ipsum
1709672400,123,dolor sit amet
1709676000,456,onsectetur adipiscing elit
1709679600,123,sed do eiusmod
1709694000,456,tempor incididunt
1709722800,123,ut labore et dolore
1709816400,123,Ut enim ad minim veniam
1709823600,456,quis nostrud exercitation
1709906400,123,ullamco laboris nisi
1709910000,456,ut aliquip ex ea
1709913600,123,commodo consequat
1710086400,123,Duis aute irure
1710090000,456,dolor in reprehenderit"
| streamstats count
| eval date=86400*floor(_time/86400), event{count}=(_time-date)/3600
| lookup intidev_work_schedules.csv employee_id date
| inputlookup append=t intidev_work_schedules.csv
| eval slack_hours=(start_time-date)/3600, work_hours=(end_time-start_time)/3600, _time=coalesce(_time, start_time)
| chart values(work_hours) as "00_work_hours" values(slack_hours) as "01_slack_hours" values(event*) as "02_event*" over _time span=1d by employee_id

Work schedules could be imported from an ERP, WFM, or related system. Event data can come from any source, e.g. badge scanners, call managers, Windows security event logs, etc.
Visualized in a classic dashboard: <dashboard version="1.1" theme="light"> <label>intidev_trellis_schedule</label> <search id="base"> <query>| makeresults format=csv data="_time,employee_id,message 1709593200,123,Lorem ipsum 1709672400,123,dolor sit amet 1709676000,456,onsectetur adipiscing elit 1709679600,123,sed do eiusmod 1709694000,456,tempor incididunt 1709722800,123,ut labore et dolore 1709816400,123,Ut enim ad minim veniam 1709823600,456,quis nostrud exercitation 1709906400,123,ullamco laboris nisi 1709910000,456,ut aliquip ex ea 1709913600,123,commodo consequat 1710086400,123,Duis aute irure 1710090000,456,dolor in reprehenderit" | streamstats count | eval date=86400*floor(_time/86400), event{count}=(_time-date)/3600 | lookup intidev_work_schedules.csv employee_id date | inputlookup append=t intidev_work_schedules.csv | eval slack_hours=(start_time-date)/3600, work_hours=(end_time-start_time)/3600, _time=coalesce(_time, start_time) | chart values(work_hours) as "00_work_hours" values(slack_hours) as "01_slack_hours" values(event*) as "02_event*" over _time span=1d by employee_id</query> <earliest>-24h@h</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <search base="base"> <query>| fieldsummary | fields field | search field=02_* | rex field=field "(?&lt;field&gt;[^:]+)" | mvcombine field | eval field=mvjoin(field, ",") </query> <done> <set token="overlayFields">$result.field$</set> </done> </search> <row> <panel> <html> <style> #columnChart1 .highcharts-series.highcharts-series-1.highcharts-column-series { opacity: 0 !important; } </style> </html> <chart id="columnChart1"> <search base="base"/> <option name="charting.axisLabelsY.majorTickVisibility">show</option> <option name="charting.axisLabelsY.majorUnit">1</option> <option name="charting.axisLabelsY.minorTickVisibility">hide</option> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.axisTitleY.text">Hour</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisX.abbreviation">none</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.abbreviation">none</option> <option name="charting.axisY.includeZero">1</option> <option name="charting.axisY.maximumNumber">24</option> <option name="charting.axisY.minimumNumber">0</option> <option name="charting.axisY.scale">linear</option> <option name="charting.chart">column</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.overlayFields">$overlayFields$</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.drilldown">none</option> <option name="charting.fieldColors">{"00_work_hours": 0xc6e0b4}</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.placement">none</option> <option name="trellis.enabled">1</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">large</option> <option name="trellis.splitBy">employee_id</option> </chart> </panel> </row> </dashboard> I've used a post-process search and event handler to define a token named $overlayFields$ that will dynamically set the charting.chart.overlayFields option. Note that I haven't correctly handled schedules that cross day boundaries. @Richfez's timeline example handles this nicely, but when using a column chart, you'll need to calculate boundaries and new events in SPL using e.g. 
eval and mvexpand. I don't use Dashboard Studio as often as I use Simple XML. Trellis mode and inline CSS overrides have limited or no support in Dashboard Studio, and feature parity between Splunk Cloud and Splunk Enterprise varies.
@gcusello Hi Giuseppe, thanks for the guidance! As you can tell, I am a newbie here. Actually, I did post a new question here: https://community.splunk.com/t5/Getting-Data-In/How-to-forward-only-Windows-events-XML-to-a-3rd-party-system/td-p/680458. I was struggling and saw your Q/A. I understand filtering at the forwarder is not a good idea. In any case, I've figured out how exactly to filter things out on the Splunk server so my 3rd party partner would get XmlWinEvtLog messages only. Thanks again! Billy
Hello All, I am getting this message on the SH: The search head is unable to update the peer information. Error = 'Unable to reach the cluster manager' for manager=https://x.x.x.x:8089. The SH is trying to connect to the CM that is Standby. We have 2 CMs; one CM is Active and the other one is Standby. Below are the configs on the CM and SH. Could you please advise if anything is wrong in the config? Thanks!

CM config:

[clustering]
mode = manager
manager_switchover_mode = auto
manager_uri = clustermanager:dc1,clustermanager:dc2
pass4SymmKey = key
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:3
site_search_factor = origin:1,site2:1,site1:1,total:3
cluster_label = publisher_cluster
access_logging_for_heartbeats = 0
cm_heartbeat_period = 3
precompress_cluster_bundle = 1
rebalance_threshold = 0.9

[clustermanager:dc1]
manager_uri = https://x.x.x.x:8089

[clustermanager:dc2]
manager_uri = https://x.x.x.x:8089

SH config:

[general]
serverName = x.com
pass4SymmKey = key
site = site2

[clustering]
mode = searchhead
manager_uri = clustermanager:dc1, clustermanager:dc2

[clustermanager:dc1]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[clustermanager:dc2]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[replication_port://9100]

[shclustering]
conf_deploy_fetch_url = https://x.x.x.x:8089
mgmt_uri = https://x.x.x.x:8089
disabled = 0
pass4SymmKey = key
replication_factor = 2
shcluster_label = publisher_shcluster
id = B57109F1-5D63-4FC9-9BFC-BE6B0375D9A7
manual_detention = off

Dhana
I've set up Splunk Enterprise as a trial in a test domain; however, I'm having issues importing logs from different remote sources. Firstly, it says to connect to an LDAP before importing remote data. I tried this, however it won't connect to the domain; there are too many fields in there to fill in without being given examples. "Could not find userBaseDN on the LDAP server". I tried installing the Splunk forwarder on a Windows-based DC and set the Splunk server's forwarding and receiving to receive on port 9997. Then I tried importing the host again and keep getting errors about WMI classes from host blah blah. Where is the documentation on setting up WMI for different remote sources? This piece should be easy. God help me when I try to add logs from networking devices. Real answers only please, no time wasters. Cheers,
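For what it's worth, remote Windows event log collection over WMI is configured in wmi.conf on the collecting instance; a minimal sketch (the stanza name and server name below are placeholders for illustration) looks like:

[WMI:AppAndSys]
server = dc01
interval = 10
event_log_file = Application, System, Security
disabled = 0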