All Posts

The old connector didn't support Db2 on Z. Wondering if the latest version on Splunkbase now supports mainframe Db2 on z/OS. Thanks.
Hi @avikc100

Were you able to try my previous example (see below)? If there is an issue with it I'd be happy to try and resolve it for you. Thanks.

@livehybrid wrote:

Hi @avikc100

You can create a search that calculates the relevant dates, which sets tokens, and then use the tokens:

<search id="days">
  <query>| makeresults
| eval dayMinus0=strftime(now(), "%d/%m/%Y")
| eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
| eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
| eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
| eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
| eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
  <done>
    <set token="dayMinus0">$result.dayMinus0$</set>
    <set token="dayMinus1">$result.dayMinus1$</set>
    <set token="dayMinus2">$result.dayMinus2$</set>
    <set token="dayMinus3">$result.dayMinus3$</set>
    <set token="dayMinus4">$result.dayMinus4$</set>
    <set token="dayMinus5">$result.dayMinus5$</set>
  </done>
</search>

Then use $dayMinusN$ for each panel title, where N is the number of days back.

Below is the full XML example of that dashboard for you to play with if it helps:

<dashboard version="1.1" theme="light">
  <label>SplunkAnswers1</label>
  <search id="days">
    <query>| makeresults
| eval dayMinus0=strftime(now(), "%d/%m/%Y")
| eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
| eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
| eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
| eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
| eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
    <done>
      <set token="dayMinus0">$result.dayMinus0$</set>
      <set token="dayMinus1">$result.dayMinus1$</set>
      <set token="dayMinus2">$result.dayMinus2$</set>
      <set token="dayMinus3">$result.dayMinus3$</set>
      <set token="dayMinus4">$result.dayMinus4$</set>
      <set token="dayMinus5">$result.dayMinus5$</set>
    </done>
  </search>
  <search id="baseTest">
    <query>| tstats count where index=_internal by _time, host span=1d
| eval daysAgo=floor((now()-_time)/86400)</query>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <row>
    <panel>
      <table>
        <title>$dayMinus0$</title>
        <search base="baseTest">
          <query>| where daysAgo=0 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus1$</title>
        <search base="baseTest">
          <query>| where daysAgo=1 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus2$</title>
        <search base="baseTest">
          <query>| where daysAgo=2 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus3$</title>
        <search base="baseTest">
          <query>| where daysAgo=3 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus4$</title>
        <search base="baseTest">
          <query>| where daysAgo=4 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus5$</title>
        <search base="baseTest">
          <query>| where daysAgo=5 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @Ram2

Another approach would be to use a single query without any subsearch/append etc.:

index=test-index (("ERROR" Code=OPT OR Code=ONP) OR ("WARN" "User had issues with code" Code=OPT OR Code=ONP code_ip IN(1001, 1002, 1003, 1004)) OR ("INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP code_data IN(iias, iklm, oilk)))
| bin _time span=1d
| eval TOATL_ONIP1=if(match(_raw, "ERROR") AND (Code="OPT" OR Code="ONP"), 1, 0)
| eval TOATL_ONIP2=if(match(_raw, "WARN") AND match(_raw, "User had issues with code") AND (Code="OPT" OR Code="ONP") AND code_ip IN(1001, 1002, 1003, 1004), 1, 0)
| eval TOATL_ONIP3=if(match(_raw, "INFO") AND match(_raw, "POST") AND NOT match(_raw, "GET /authenticate/mmt") AND (Code="OPT" OR Code="ONP") AND code_data IN(iias, iklm, oilk), 1, 0)
| stats sum(TOATL_ONIP1) as TOATL_ONIP1 sum(TOATL_ONIP2) as TOATL_ONIP2 sum(TOATL_ONIP3) as TOATL_ONIP3 by Code _time
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOATL_ONIP1 TOATL_ONIP2 TOATL_ONIP3

This determines the ONIP number based on fields in the event and then does a stats to count each ONIP by Code.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Please provide the source code for your dashboard in a code block using the </> button to insert the text.
Sorry for the confusion! I want the system date here in this text area in the dashboard.
Verified in production: SmartStore will support uploading and searching of reduced buckets. You need to turn off tsidx reduction for that index before enabling the SmartStore config on it, of course.
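For reference, a minimal indexes.conf sketch of that ordering; the index name, paths and S3 bucket below are placeholders I've assumed, not values from the post:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# turn off tsidx reduction for the index first...
enableTsidxReduction = false
# ...then point the index at the SmartStore remote volume
remotePath = volume:remote_store/$_index_name

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket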
I don't recall ever seeing DB Connect configured so that it sends to a HEC input outside of the HF it's running on. Theoretically it's possible - see https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/settingsconfspec - but I must say I've never seen it configured this way. Anyway, first check your config, then debug the appropriate HEC inputs.
Hi, I am trying to display the number of events per day from multiple indexes, substituting 0 where an index has no events. I wrote the SPL below: with the chart command I can display the event counts, and when a specific index is null I can substitute 0 using isnull, but when all index values are null for a specific date, the row for that date itself is not displayed.

index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count _time over index
| eval index4=if(isnull(index4), 0, index4)

How can I display a row for 4/2 with 0 substituted, as in the table below, even when all index values for 4/2 are null?

        index1  index2  index3  index4
  4/1   12      3       45      0
  4/2   0       0       0       0
  4/3   16      7       34      0
Be also aware of the subsearch limitations. You can't run them for long and they have a limit on returned results. So if you run them over a big data set you might get incomplete results, and since the subsearches will get silently finalized you won't even know about it. So in your case it would probably be better to search for all matching events initially and tag them according to which specific set of conditions they match, using conditional field assignment ( | eval something=if( [...] ) ) - see the sketch below.
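A minimal sketch of that tagging approach; the index, search terms and field names are placeholders loosely borrowed from this thread, not a definitive query:

index=test-index ("ERROR" OR "WARN")
| eval matched_set=case(
    match(_raw, "ERROR") AND (Code="OPT" OR Code="ONP"), "set1",
    match(_raw, "WARN") AND match(_raw, "User had issues with code"), "set2",
    true(), "other")
| stats count by matched_set, Code

One pass over the data assigns each event to a bucket, and a single stats then counts each bucket, instead of running a separate subsearch per condition.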
I don't understand your objections to methods 1 and 2. The complexity of the JSON structure shouldn't matter as long as the event is valid JSON and - in case 2 - doesn't exceed the maximum number of fields handled by auto-kv.
Hi @Corky_

Regarding the first option of applying the span after the _time and before the other fields in the "BY" clause of your tstats command: I personally prefer to put the span at the end, rather than in the middle of the by list, to keep it cleaner and to avoid it being confused with a field. The tstats docs also suggest it should be at the end:

[ BY (<field-list> | (PREFIX(<field>))) [span=<timespan>]]

On the second query, I'm confused as to how you could bin by _time with tstats if you haven't specified _time in the by clause initially. If you do not split by _time in the initial part of the query, then the _time field won't be available to the bin command.

FWIW - I find the bin command good for doing stats by multiple fields over _time, when you cannot do it with timechart (see the sketch after this post).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
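A rough sketch of both points, using the _internal index purely as an example (field and span choices are assumptions). Span at the end of the tstats BY clause, as the docs describe:

| tstats count where index=_internal by _time, host, sourcetype span=1d

And bin plus a second stats for re-bucketing over _time split by multiple fields, where timechart would not fit:

| tstats count where index=_internal by _time, host, sourcetype span=1h
| bin _time span=1d
| stats sum(count) as count by _time, host, sourcetype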
Hi @juhiacc

The error java.io.IOException: There are no Http Event Collectors available at this time indicates that the Splunk DB Connect application, running on your Heavy Forwarder (HF), cannot successfully send data to the configured HTTP Event Collector (HEC) endpoint(s) on your Splunk indexers. This usually stems from one of the following issues:

1. HEC Not Enabled or Misconfigured on Indexers:
- Verify that HEC is enabled globally and available on all indexers/HFs which DB Connect is directed to.
- Confirm the specific HEC token used by your DB Connect input is enabled and valid on all appropriate hosts.
- Ensure the index specified in the HEC token configuration and/or the DB Connect input exists and is not disabled.

2. Incorrect HEC Configuration in DB Connect:
- Within the DB Connect app on the HF, ensure HEC is configured correctly.

3. Network Connectivity Issues:
- Confirm the HF can reach the indexer(s) on the HEC port (default 8088).
- Check firewalls between the HF and indexers.
- Use tools like curl or telnet from the HF to test connectivity to https://<indexer_hostname_or_IP>:8088 (see the curl sketch after this post).
- If using a load balancer in front of your indexers for HEC, ensure it is configured correctly and all backend indexer nodes are healthy and responding.

4. Indexer(s) Overloaded or Unavailable:
- Check the health of your indexers using the Monitoring Console (Monitoring Console -> Indexing -> Performance -> Indexer Performance). Overloaded indexers might refuse HEC connections.
- Ensure the indexers are running and accessible.

Additional tips:
- If you have multiple indexers or an indexer cluster, ensure the HEC configuration is consistent across all relevant nodes.
- If using a deployment server to manage the DB Connect app configuration on the HF, ensure the correct config is deployed.
- Check splunkd.log on both the HF and the target indexer(s) for more detailed connection errors or HEC processing issues around the time the DB Connect job fails.

Relevant documentation worth checking:
- Splunk DB Connect Overview: https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/AboutSplunkDBConnect
- Set up and use HTTP Event Collector in Splunk Web: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
- Troubleshooting DB Connect: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
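A quick sketch of the connectivity check described above, run from the HF; the indexer hostname and HEC token below are placeholders, not values from the post:

# Check whether the indexer's HEC endpoint answers at all
curl -k https://idx1.example.com:8088/services/collector/health

# Then send a test event with the same token DB Connect is configured to use
curl -k https://idx1.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "DB Connect HEC connectivity test", "sourcetype": "hec_test"}'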
@juhiacc

If you have trouble receiving data from DBX3, search the internal index for:

xx-xx-xxxx xx:xx:xx.xxx +0000 FATAL HTTPServer - Could not bind to port 8088

You could do:

netstat -anp | grep 8088

and ensure that you see the Splunk process using the port number. You might also see port 8088 in the metrics.log file of your Splunk server receiving the traffic if there is data coming through.

Is there a firewall between your DB Connect server and the HEC server? Ensure the port(s) are available.

Ensure that the global settings are enabled on the Splunk HEC server:
1. Click Settings > Data Inputs.
2. Click HTTP Event Collector.
3. Click Global Settings.
4. In the All Tokens toggle button, select Enabled.

Some other aspects to check and troubleshoot:

# Check if the HEC collector is healthy
curl -k -X GET -u admin:mypassword https://MY_Splunk_HEC_SERVER:8088/services/collector/health/1.0

# Check if HEC stanzas with config are configured
/opt/splunk/bin/splunk http-event-collector list -uri https://MY_Splunk_HEC_SERVER:8089

# Check the settings using btool
/opt/splunk/bin/splunk cmd btool inputs list --debug http
@Arun2

Splunk's trial accounts are designed to attract potential enterprise customers who are likely to purchase a paid license after evaluation. A business email ties the trial to a company, making it easier for Splunk's sales team to follow up and convert trial users into paying customers. Personal emails don't provide this connection, reducing the likelihood of a sales lead.

Personal email addresses are also easier to create in bulk (e.g., multiple Gmail accounts), which could lead to abuse, such as creating multiple trial accounts to bypass usage limits (e.g., the 500 MB/day indexing limit for Splunk Enterprise or the 14-day Splunk Cloud trial). By requiring a business email, Splunk reduces the risk of such exploitation.
appendcols does not correlate values from existing columns; try using append and then a final stats with values() by Start_Date and Code:

index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOATL_ONIP1 by Code _time
| append
    [| search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
    | search code_ip IN(1001, 1002, 1003, 1004)
    | bin _time span=1d
    | stats count as TOATL_ONIP2 by Code _time]
| append
    [| search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP
    | search code_data IN(iias, iklm, oilk)
    | bin _time span=1d
    | stats count as TOATL_ONIP3 by Code _time]
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| stats values(TOATL_ONIP1) as TOATL_ONIP1 values(TOATL_ONIP2) as TOATL_ONIP2 values(TOATL_ONIP3) as TOATL_ONIP3 by Start_Date Code
Hi @Arun2

I believe this is to prevent abuse of the trial system; free email services are often excluded from creating trial accounts because it is easy for people to keep requesting new trials with new email addresses. It may also be covered in the Terms and Conditions that a trial is for business trial usage rather than personal development.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Great, it worked for me as well. I’ll go ahead and mark this as the answer. I’ll also raise a support ticket to solve this bug. Thanks, @livehybrid  Regards,
If you do tstats by _time without binning and then do bin, you'll have to stats again to summarise your data. Bin on its own doesn't aggregate data; it just aligns the field into discrete points.
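A minimal sketch of that pattern, using the _internal index purely as an example (the span values are arbitrary):

| tstats count where index=_internal by _time span=1m, host
| bin _time span=1h
| stats sum(count) as count by _time, host

The bin command only rewrites each _time value to the start of its hour; the final stats is what actually rolls the per-minute counts up into hourly totals.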
There is one more thing I missed. You said that you're receiving the events via a HEC input. Which endpoint are you using? I'd have to check about the /raw endpoint, but the /event endpoint bypasses line breaking completely. So regardless of what you set as the line breaker, the events come in as they are received (see the sketch below).
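For reference, a sketch of the two endpoints; the hostname, token and sourcetype are placeholders, and the /raw behaviour is worth verifying for your version as noted above:

# /event endpoint: each JSON "event" payload is indexed as one event,
# so line-breaking settings are not applied to it
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "line one\nline two", "sourcetype": "my_sourcetype"}'

# /raw endpoint: the payload goes through the usual ingestion pipeline,
# where the sourcetype's line-breaking settings can apply
curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=my_sourcetype" \
  -H "Authorization: Splunk <token>" \
  -d 'line one
line two'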