All Posts


I don't recall ever seeing DB Connect configured so that it sends to a HEC input outside of the HF it's running on. Theoretically it's possible - see https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/settingsconfspec - but I must say I've never seen it configured this way. Anyway, first check your config, then debug the appropriate HEC inputs.
Hi, I'm trying to display the number of events per day from multiple indexes, substituting 0 for nulls. I wrote the SPL below. Using the chart command I can display the event counts, and when a specific index's value is null I can substitute 0 with isnull, but when all index values are null for a particular date, the row for that date itself is not displayed.

index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count _time over index
| eval index4=if(isnull(index4), 0, index4)

Is there a way to display the 4/2 row by substituting 0, as in the table below, even when all index values for 4/2 are null?

      index1  index2  index3  index4
4/1   12      3       45      0
4/2   0       0       0       0
4/3   16      7       34      0
Be also aware of the subsearch limitations. You can't run them for long and they have a limit on returned results. So if you run them over a big data set you might get incomplete results, and since the subsearches get silently finalized you won't even know about it. So in your case it would probably be better to search for all matching events initially and tag them according to which set of conditions they match, using conditional field assignment ( | eval something=if( [...] ) ).
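A minimal sketch of that tagging approach - the index and field names here are made up for illustration:

index=myindex (status="failed" OR action="lockout" OR result="denied")
| eval match_set=case(status=="failed", "set1",
    action=="lockout", "set2",
    result=="denied", "set3",
    true(), "other")
| stats count by match_set

One base search fetches everything, eval/case tags each event with the condition set it matched, and a single stats summarises - no subsearch result limits to worry about.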
I don't understand your objections to methods 1 and 2. Complexity of the JSON structure shouldn't matter as long as the event is valid JSON and - in case 2 - doesn't exceed the maximum number of fields handled by auto-kv.
Hi @Corky_

Regarding the first option of applying the span after _time and before other fields in the BY clause of your tstats command: I personally prefer to put the span at the end rather than in the middle of the by list, to keep it cleaner and so it isn't confused with a field. The tstats docs also suggest it should be at the end:

[ BY (<field-list> | (PREFIX(<field>))) [span=<timespan>]]

On the second query, I'm confused as to how you could bin by _time with tstats if you haven't specified _time in the by clause initially. If you do not split by _time in the initial part of the query, then the _time field won't be available to the bin command.

FWIW - I find the bin command good for doing stats by multiple fields over _time, when you cannot do it with timechart.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
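For illustration, the two placements look like this - the datamodel and field are just examples; in my experience both are accepted, but the second matches the docs:

| tstats count from datamodel=Endpoint.Processes by _time span=1h, Processes.dest
| tstats count from datamodel=Endpoint.Processes by _time, Processes.dest span=1h

Both produce hourly buckets; the second form just keeps the by-list unambiguous.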
Hi @juhiacc

The error java.io.IOException: There are no Http Event Collectors available at this time indicates that the Splunk DB Connect application, running on your Heavy Forwarder (HF), cannot successfully send data to the configured HTTP Event Collector (HEC) endpoint(s) on your Splunk indexers. This usually stems from one of the following issues:

HEC Not Enabled or Misconfigured on Indexers:
- Verify that HEC is enabled globally and available on all indexers/HFs to which DB Connect is directed.
- Confirm the specific HEC token used by your DB Connect input is enabled and valid on all appropriate hosts.
- Ensure the index specified in the HEC token configuration and/or the DB Connect input exists and is not disabled.

Incorrect HEC Configuration in DB Connect:
- Within the DB Connect app on the HF, ensure HEC is configured correctly.

Network Connectivity Issues:
- Confirm the HF can reach the indexer(s) on the HEC port (default 8088). Check firewalls between the HF and indexers.
- Use tools like curl or telnet from the HF to test connectivity to https://<indexer_hostname_or_IP>:8088 (see the sketch after this list).
- If using a load balancer in front of your indexers for HEC, ensure it is configured correctly and all backend indexer nodes are healthy and responding.

Indexer(s) Overloaded or Unavailable:
- Check the health of your indexers using the Monitoring Console (Monitoring Console -> Indexing -> Performance -> Indexer Performance). Overloaded indexers might refuse HEC connections.
- Ensure the indexers are running and accessible.

Additional tips:
- If you have multiple indexers or an indexer cluster, ensure the HEC configuration is consistent across all relevant nodes.
- If using a deployment server to manage the DB Connect app configuration on the HF, ensure the correct config is deployed.
- Check splunkd.log on both the HF and the target indexer(s) for more detailed connection errors or HEC processing issues around the time the DB Connect job fails.

Relevant documentation worth checking:
- Splunk DB Connect Overview: https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/AboutSplunkDBConnect
- Set up and use HTTP Event Collector in Splunk Web: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
- Troubleshooting DB Connect: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
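A quick way to exercise the HEC endpoint end to end from the HF - the hostname and token below are placeholders for your own values:

# Send a test event to the /event endpoint; a healthy HEC returns {"text":"Success","code":0}
curl -k https://<indexer_hostname_or_IP>:8088/services/collector/event \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hec connectivity test", "sourcetype": "manual"}'

If this fails from the HF but works from elsewhere, the problem is almost certainly network/firewall rather than the DB Connect config.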
@juhiacc

If you have trouble receiving data from DBX3, search the internal index for:

xx-xx-xxxx xx:xx:xx.xxx +0000 FATAL HTTPServer - Could not bind to port 8088

You could do:

netstat -anp | grep 8088

and ensure that you see the Splunk process using the port number. You might also see port 8088 in the metrics.log file of the Splunk server receiving the traffic if there is data coming through...

Is there a firewall between your DB Connect server and the HEC server? Ensure the port(s) are available.

Ensure that on the Splunk HEC server you have global settings enabled:
1. Click Settings > Data Inputs.
2. Click HTTP Event Collector.
3. Click Global Settings.
4. In the All Tokens toggle button, select Enabled.

Some other aspects to check and troubleshoot:

# Check if the HEC collector is healthy
curl -k -X GET -u admin:mypassword https://MY_Splunk_HEC_SERVER:8088/services/collector/health/1.0

# Check if HEC stanzas with config are configured
/opt/splunk/bin/splunk http-event-collector list -uri https://MY_Splunk_HEC_SERVER:8089

# Check the settings using btool
/opt/splunk/bin/splunk cmd btool inputs list --debug http
@Arun2

Splunk's trial accounts are designed to attract potential enterprise customers who are likely to purchase a paid license after evaluation. A business email ties the trial to a company, making it easier for Splunk's sales team to follow up and convert trial users into paying customers. Personal emails don't provide this connection, reducing the likelihood of a sales lead.

Personal email addresses are also easier to create in bulk (e.g., multiple Gmail accounts), which could lead to abuse, such as creating multiple trial accounts to bypass usage limits (e.g., the 500 MB/day indexing limit for Splunk Enterprise or the 14-day Splunk Cloud trial). By requiring a business email, Splunk reduces the risk of such exploitation.
appendcols does not correlate values from existing columns; try using append and then a final stats with values() by Start_Date and Code:

index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time
| append
    [| search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
     | search code_ip IN(1001, 1002, 1003, 1004)
     | bin _time span=1d
     | stats count as TOTAL_ONIP2 by Code _time]
| append
    [| search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP
     | search code_data IN(iias, iklm, oilk)
     | bin _time span=1d
     | stats count as TOTAL_ONIP3 by Code _time]
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| stats values(TOTAL_ONIP1) as TOTAL_ONIP1 values(TOTAL_ONIP2) as TOTAL_ONIP2 values(TOTAL_ONIP3) as TOTAL_ONIP3 by Start_Date Code
Hi @Arun2

I believe this is to prevent abuse of the trial system; free email services are often excluded from creating trial accounts because it is easy for people to keep requesting new trials with new email addresses.

It may also be covered in the Terms and Conditions that a trial is for business usage rather than personal development.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Great, it worked for me as well. I'll go ahead and mark this as the answer. I'll also raise a support ticket to get this bug fixed. Thanks, @livehybrid!
If you do tstats by _time without binning and then do bin, you'll have to run stats again to summarise your data. bin on its own doesn't aggregate data; it just aligns the field into discrete points.
There is one more thing I missed. You said that you're receiving the events via HEC input. Which endpoint are you using? I'd have to check about the /raw endpoint, but the /event endpoint bypasses line breaking completely. So regardless of what you set as the line breaker, the events come in as they are received.
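For reference, the two endpoints look like this - host and token are placeholders, and the behaviour notes in the comments reflect my understanding rather than anything confirmed above:

# /event endpoint: each JSON "event" value is indexed as-is; LINE_BREAKER is not applied
curl -k https://hec.example:8088/services/collector/event \
     -H "Authorization: Splunk <token>" \
     -d '{"event": "line1\nline2"}'

# /raw endpoint: the body is treated as a raw byte stream and, as I understand it,
# goes through the normal line-breaking pipeline
curl -k "https://hec.example:8088/services/collector/raw?sourcetype=my_sourcetype" \
     -H "Authorization: Splunk <token>" \
     --data-binary $'line1\nline2'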
Pretty sure your \s needs to be a newline, which is not necessarily the same thing as whitespace, like @PickleRick said. This regex will get your breaks and only leave the footer on the last event, and it will break the header into its own event, which you can just ignore. All of this holds as long as your data format doesn't change LOL

[\[\}]+([,\s\r\n]+){
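If it helps, this is roughly where that regex would live in props.conf - the sourcetype name is a placeholder, and remember LINE_BREAKER treats its first capture group as the event boundary (that captured text is discarded):

[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = [\[\}]+([,\s\r\n]+){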
Query1:

index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time

Query2:

index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
| search code_ip IN(1001, 1002, 1003, 1004)
| bin _time span=1d
| stats count as TOTAL_ONIP2 by Code _time

Query3:

index=test-index "INFO" "POST" NOT "GET /authenticate/mmt"
| search code_data IN(iias, iklm, oilk)
| bin _time span=1d
| stats count as TOTAL_ONIP3 by Code _time

Combined query:

index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time
| appendcols
    [| search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
     | search code_ip IN(1001, 1002, 1003, 1004)
     | bin _time span=1d
     | stats count as TOTAL_ONIP2 by Code _time]
| appendcols
    [| search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP
     | search code_data IN(iias, iklm, oilk)
     | bin _time span=1d
     | stats count as TOTAL_ONIP3 by Code _time]
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOTAL_ONIP1 TOTAL_ONIP2 TOTAL_ONIP3

Output for individual query1:

Start_Date  Code  TOTAL_ONIP1
2025-04-01  OPT   2
2025-04-02  OPT   4
2025-04-03  OPT   0
2025-04-01  ONP   1
2025-04-02  ONP   2
2025-04-03  ONP   3

Output for individual query2:

Start_Date  Code  TOTAL_ONIP2
2025-04-01  OPT   0
2025-04-02  OPT   0
2025-04-03  OPT   0
2025-04-01  ONP   4
2025-04-02  ONP   2
2025-04-03  ONP   3

Output for individual query3:

Start_Date  Code  TOTAL_ONIP3
2025-04-01  OPT   0
2025-04-02  OPT   0
2025-04-03  OPT   9
2025-04-01  ONP   0
2025-04-02  ONP   6
2025-04-03  ONP   8

Combined query output:

Start_Date  Code  TOTAL_ONIP1  TOTAL_ONIP2  TOTAL_ONIP3
2025-04-01  OPT   2            4            9
2025-04-02  OPT   4            2            6
2025-04-03  OPT   1            3            8
2025-04-01  ONP   2
2025-04-02  ONP   3
2025-04-03  ONP

When we combine the queries, the counts do not match the individual queries. For example: on April 1st, TOTAL_ONIP2 for ONP is 4, but the combined output shows null, and the value 4 ended up in the OPT row for April 1st.
The first example will produce a count of destinations, etc., for each hour of the search time window. Something like this:

_time  Processes.dest  count
12:00  foo             2
12:00  bar             1
13:00  foo             4
13:00  bar             2

The second example will produce counts by destination, etc. The counts will not be broken down by time:

Processes.dest  count
foo             6
bar             3

The bin command will have no effect because there is no _time field at that point. Putting span in the tstats command gives you control over the bin sizes. Without span, tstats will choose a span it thinks best fits the data.
Does the file ever change? If so, I would index the file and then create a scheduled search to update the lookup based on the indexed data.

If it never changes, just import the file one time with the Lookup Editor App.
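A minimal sketch of the scheduled-search approach - the index, field names, and lookup file name are placeholders:

index=lookup_source_index
| stats latest(value) as value by key
| outputlookup my_lookup.csv

Schedule that search to run as often as the file is refreshed, and outputlookup overwrites the CSV with the latest indexed state.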
@pck_npluyaud I personally prefer @PickleRick option 2, and as @yuanliu mentioned, if it isn't working it's because the JSON isn't properly formatted. If the JSON isn't properly formatted and it's in-house, you can try to get it fixed; if it's a paid product, that sucks and you can try to open a support ticket, but good luck. If you have to fall back to a regex because you can't get the JSON fixed... go to regex101 and build your regex there. Make sure you are using the bare minimum of escapes ("\") and don't use any you don't have to. The props file handles things ever so slightly differently than the Search GUI, so both should work with teeny tweaks, but the cleanest version is the one you want in your props file.
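To show the kind of teeny tweak I mean - the field and pattern below are made up - here is the same extraction tested at search time and then moved into props.conf:

| rex field=_raw "\"status\":\s*\"(?<status>[^\"]+)\""

[my_sourcetype]
EXTRACT-status = "status":\s*"(?<status>[^"]+)"

In rex the pattern sits inside a quoted search string, so the inner double quotes need escaping; in props.conf the value is taken literally, so those escapes come back out.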
I personally like to put _time span=whatever, like you have in your first example, everywhere it will work (like with timechart), since it works and it makes it clear what you are spanning. For the longest time I was not using timechart and span correctly, until I learned you should put the span literally right next to _time to make sure it is getting applied appropriately, so now I just do that everywhere. But to answer your real question... what is the technical difference... IDK
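E.g. (the index name is a placeholder), both of these keep the span pinned to the time axis:

index=main | timechart span=1h count
| tstats count where index=main by _time span=1h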