All Topics

Hello all, I need some help with eventgen. For TA_Windows I need to create some samples for testing. My environment is Splunk 8.1.1 (Linux), and TA_Windows and SA-Eventgen are both the newest versions. I need over 40 sample stanzas; for the time interval I used earliest=-15m and latest=now, an interval of 30 or 60, and a count of 10 or 15 for each sample. The whole eventgen.conf file is pretty long, about 6,000 lines. The issue is that on Linux only about half of the samples ever run; some samples never run, yet there are no errors in the conf. I tested the same conf file on Windows Splunk 8.1.1, and there every sample ran. Is there some limitation on eventgen.conf file length or on the number of samples? Or should I design better scheduling for each sample? If so, I need some advice: does someone have a schedule design that ensures each sample runs regularly?
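For illustration, a minimal eventgen.conf sketch (file names and values hypothetical): staggering `interval` across samples so fewer of them fire at the same moment is one way to keep a large conf from starving some samples.

```
# Hypothetical sample stanza; one of ~40 in the conf
[windows_security_sample.log]
interval = 60
earliest = -15m
latest = now
count = 10

# Stagger a second sample onto a different cadence so the two
# generators do not always wake up together
[windows_system_sample.log]
interval = 90
earliest = -15m
latest = now
count = 15
```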
index=_* OR index=* sourcetype=Kamailio BC="Current Billable Calls Count:" | rex field=_raw "Count:(?<Billablecalls>.*)" | timechart max(Billablecalls)

index=_* OR index=* sourcetype=Kamailio NBC="Current NON-Billable Calls Count:" | rex field=_raw "Calls Count:(?<NonBillableCalls>.*)" | timechart max(NonBillableCalls)

index=_* OR index=* sourcetype=Kamailio CAIB="Current Active Inbound Calls:" | rex field=_raw "Calls: (?<Inboundcalls>.*)" | timechart max(Inboundcalls)

The above are three separate searches, but I would like to combine them and plot all three call counts against time on a single chart. Is there any viable solution for this? Any degree of help will be appreciated.
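One possible way to combine them (a sketch: the rex patterns are guessed from the message fragments above and may need adjusting) is to match all three message types in one search, extract each count into its own field, and timechart them together:

```
index=_* OR index=* sourcetype=Kamailio
    ("Current Billable Calls Count:" OR "Current NON-Billable Calls Count:" OR "Current Active Inbound Calls:")
| rex field=_raw "Current Billable Calls Count:\s*(?<Billablecalls>\d+)"
| rex field=_raw "Current NON-Billable Calls Count:\s*(?<NonBillableCalls>\d+)"
| rex field=_raw "Current Active Inbound Calls:\s*(?<Inboundcalls>\d+)"
| timechart max(Billablecalls) AS Billable, max(NonBillableCalls) AS NonBillable, max(Inboundcalls) AS Inbound
```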
I have a very basic search query to display IDs and their respective names. There are 1.3 lakh (130,000) events under the sourcetype, and every event has an ID and a name field. When I run the search query, only the top 10,000 records are displayed. I have tried displaying the results with the stats, table, chart, and fields + table commands; in all of these, only the top 10k records show in the statistics section. But I need all 1.3 lakh IDs and names so that I can output the data to a lookup file. Here is my search query:

index=main source=splunk_id_name.log sourcetype=id_metric host=xxx
| stats values(name) by id
| sort id
| rename id AS ID name AS Name

Is this a limit on the number of records Splunk can display? Or am I missing some other command? I need this very urgently; could anyone please help me get this resolved as soon as possible?
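The 10,000-row cap is typically the statistics view's display limit plus the default truncation of `sort`, not a limit on what the search produces. A sketch (lookup name hypothetical) that writes every row straight to the lookup instead of reading them off the screen:

```
index=main source=splunk_id_name.log sourcetype=id_metric host=xxx
| stats values(name) AS Name by id
| sort 0 id
| rename id AS ID
| outputlookup id_name_full.csv
```

Note the `0` in `sort 0 id`: a plain `sort` keeps only the first 10,000 results, while `sort 0` lifts that truncation.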
Hi all, I am having an issue whereby logs are not coming in along the path UF > Local HF > Regional HF > Indexer. There is also a local setting in limits.conf; could it be causing the blockage?

[thruput]
maxKBps = 30

On the Local HF splunkd.log, these are the errors:

02-17-2021 00:25:12.661 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:25:17.673 -0500 INFO TailReader - ...continuing.
02-17-2021 00:25:23.864 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:25:24.866 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:25:27.681 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:25:29.392 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:25:30.533 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:25:42.501 -0500 WARN HttpListener - Connection from 127.0.0.1:49251 didn't send us any data, disconnecting
02-17-2021 00:25:54.856 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:25:57.724 -0500 INFO TailReader - ...continuing.
02-17-2021 00:26:04.858 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:26:12.748 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:26:16.869 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:26:19.874 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:26:30.777 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:26:42.625 -0500 WARN HttpListener - Connection from 127.0.0.1:49305 didn't send us any data, disconnecting
02-17-2021 00:26:44.558 -0500 INFO TailReader - Continuing...
02-17-2021 00:26:49.560 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:27:31.935 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:27:32.885 -0500 INFO TailReader - ...continuing.
02-17-2021 00:27:33.088 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:27:37.886 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:27:41.867 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:27:42.635 -0500 WARN HttpListener - Connection from 127.0.0.1:49403 didn't send us any data, disconnecting
02-17-2021 00:27:47.902 -0500 INFO TailReader - ...continuing.
02-17-2021 00:27:56.078 -0500 WARN TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds
02-17-2021 00:27:56.078 -0500 INFO TcpInputProc - Stopping IPv4 port 514
02-17-2021 00:27:56.078 -0500 INFO TcpInputProc - Stopping IPv4 port 5514
02-17-2021 00:27:56.078 -0500 INFO TcpInputProc - Stopping IPv4 port 9997
02-17-2021 00:27:57.915 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:28:33.100 -0500 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.10.10.10_8089_hostname.com_hostname_C83
02-17-2021 00:28:34.741 -0500 INFO TailReader - Continuing...
02-17-2021 00:28:37.968 -0500 INFO TailReader - ...continuing.
02-17-2021 00:28:39.753 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:28:41.866 -0500 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?
02-17-2021 00:28:42.906 -0500 WARN HttpListener - Connection from 127.0.0.1:49528 didn't send us any data, disconnecting
02-17-2021 00:28:42.976 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
02-17-2021 00:28:44.768 -0500 INFO TailReader - Continuing...
02-17-2021 00:28:49.769 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
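If the [thruput] cap is the bottleneck, a limits.conf sketch for the Local HF (path and value illustrative): 30 KB/s is a forwarder-style ceiling that a busy heavy forwarder can saturate quickly, which then backs queues up into exactly the TailReader and TcpInputProc warnings shown in these logs.

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the Local HF
[thruput]
# 0 removes the throughput cap; any positive value is a KB/s ceiling
maxKBps = 0
```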
Hi, I'm trying to calculate the standard deviation over a range of time to create an alert that triggers when the total number of transactions falls below 3x the standard deviation.

index=data
| bucket _time span=5m
| dedup field28
| stats count as Total_transactions, stdev(Total) as Dev by _time, field37
| rename field37 as Source
| table _time, Source, Total_transactions, Dev
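A sketch of one way to express this (assuming "below 3x the standard deviation" means more than three standard deviations below the per-source average): compute the per-bucket totals first, then let eventstats derive the average and stdev across the buckets, so Dev is no longer computed from a field that only exists per row.

```
index=data
| bucket _time span=5m
| dedup field28
| stats count AS Total_transactions by _time, field37
| rename field37 AS Source
| eventstats avg(Total_transactions) AS Avg, stdev(Total_transactions) AS Dev by Source
| where Total_transactions < (Avg - 3 * Dev)
| table _time, Source, Total_transactions, Avg, Dev
```

An alert could then fire whenever this search returns any rows.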
Hi everyone, I am trying to use a lookup table and an index to compare two fields from two different sources.

The lookup has a field (fieldA) in this format:
aaa
ddd
fff

The index has a field (fieldB) in this format:
aaa.ccc.com
ddd.ccc.com
eee.ccc.com

index=stream_dns dest_asset_tag=*dns OR dest_asset_tag=A
| append [| inputlookup dnslookup.csv | table fieldA | rename fieldA as fieldB ]
| stats count by dest, fieldB

The result should be the values missing from the comparison of fieldA and fieldB, in this format:
eee
fff
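A sketch of one approach (assuming fieldB always ends in .ccc.com and only the short name matters): normalize both sides to the short name, tag each row with where it came from, and keep the names seen in only one source.

```
index=stream_dns dest_asset_tag=*dns OR dest_asset_tag=A
| eval name=mvindex(split(fieldB, "."), 0), src="index"
| append
    [| inputlookup dnslookup.csv
     | rename fieldA AS name
     | eval src="lookup"]
| stats values(src) AS seen_in, dc(src) AS sources by name
| where sources=1
```

The `seen_in` column then tells you which side each unmatched name came from (e.g. eee only in the index, fff only in the lookup).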
One of our power users was previously running 20 searches even though he has a privilege of only 10 concurrent searches. Now his searches are failing, with Splunk giving the error that the maximum of 10 concurrent searches has been reached. How can I fix this problem? We cannot raise the concurrency limit due to resource constraints on the Splunk server.
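For reference, the per-role quota lives in authorize.conf; a sketch (role name hypothetical) of where that 10-search ceiling is defined, useful for confirming or tightening it rather than raising it:

```
# $SPLUNK_HOME/etc/system/local/authorize.conf
[role_power]
# Maximum concurrent ad-hoc searches per user holding this role
srchJobsQuota = 10
# Maximum concurrent real-time searches per user holding this role
rtSrchJobsQuota = 5
```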
I'm trying to dump this info into a scheduled lookup, but these are just Azure AD UPNs that appear in the logs within whatever search time window is set. How do I efficiently get ALL UPNs that match this rex format, regardless of log time?

index=azuread
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| search GUID=*
| dedup initiatedBy.user.userPrincipalName
| table initiatedBy.user.userPrincipalName, GUID
| outputlookup zguids.csv
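One common accumulation pattern (a sketch; it assumes the search runs on a schedule, e.g. hourly over the last hour, and that zguids.csv already exists from a first run): merge each run's UPNs with what the lookup already holds, so the file grows to cover every UPN ever seen rather than being overwritten with only the current window.

```
index=azuread
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| search GUID=*
| stats latest(_time) AS last_seen by initiatedBy.user.userPrincipalName, GUID
| inputlookup append=true zguids.csv
| stats max(last_seen) AS last_seen by initiatedBy.user.userPrincipalName, GUID
| outputlookup zguids.csv
```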
We lost file monitoring on multiple paths on 2 specific Windows servers. We're monitoring very chatty, high-volume logs. Monitoring worked for a little while, but it has now stopped ingesting the logs and we are getting the following error, warning, and info entries. I need some help troubleshooting this and isolating the bottleneck. Thanks.

splunkd log:

01-17-2021 12:54:31.379 -0800 ERROR TailReader - Was unable to open file: D:\####\####\####\#####.log.
01-17-2021 18:10:07.686 -0800 WARN TailReader - Insufficient permissions to read file='D:\####\####\####\####.log' (hint: The system cannot find the file specified.)
01-18-2021 04:38:42.109 -0800 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
01-18-2021 04:38:42.109 -0800 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
01-19-2021 23:29:55.771 -0800 INFO PeriodicHealthReporter - feature="TailReader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.tailreader-0.data_out_rate
Hello guys, I am preparing for the Splunk Enterprise Admin certification, and I am getting a bit confused by the Splunk documentation. Namely, there are two different statements about the distsearch.conf [distributedSearch] stanza, and I am not sure which one is right.

Splunk/8.1.2/DistSearch/Configuredistributedsearch states:

"Add the search peers
To connect the search peers:
1. On the search head, create or edit a distsearch.conf file in $SPLUNK_HOME/etc/system/local.
2. Add the search peers to the servers setting under the [distributedSearch] stanza. Specify the peers as a set of comma-separated values (host names or IP addresses with management ports). For example:

[distributedSearch]
servers = https://192.168.1.1:8089,https://192.168.1.2:8089

Note: You must precede the host name or IP address with the URI scheme, either "http" or "https"."

Splunk/8.1.2/DistSearch/Distributedsearchgroups, on the other hand, states:

"You define distributed search groups in distsearch.conf. For example, to create the two search groups NYC and SF, create stanzas like these:

[distributedSearch]
# This stanza lists the full set of search peers.
servers = 192.168.1.1:8089, 192.168.1.2:8089, 175.143.1.1:8089, 175.143.1.2:8089, 175.143.1.3:8089

[distributedSearch:NYC]
# This stanza lists the set of search peers in New York.
default = false
servers = 192.168.1.1:8089, 192.168.1.2:8089

[distributedSearch:SF]
# This stanza lists the set of search peers in San Francisco.
default = false
servers = 175.143.1.1:8089, 175.143.1.2:8089, 175.143.1.3:8089"

The first page says that the http/https URI scheme is required before the hostname/IP in the servers setting of the [distributedSearch] stanza; the second omits it and says nothing about the scheme being required. I am not at the stage of testing this myself yet, so I thought I would ask here.
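For what it's worth, the two pages can be read as consistent if the scheme simply defaults to https when omitted; that defaulting is an assumption here, not something either page states. A combined sketch (IPs from the docs) that sidesteps the ambiguity by being explicit in the base stanza:

```
# distsearch.conf sketch; whether the https:// prefix is required or
# merely tolerated is exactly the question above, so it is spelled out
[distributedSearch]
servers = https://192.168.1.1:8089, https://192.168.1.2:8089

[distributedSearch:NYC]
default = false
servers = 192.168.1.1:8089, 192.168.1.2:8089
```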
Thanks in advance 
We are using Splunk Cloud, and the limited information I have read about WinHostMon://Service suggests that it may be possible to retrieve StartName, which is the username associated with the logon for the service, but we can't seem to find out why that is the only field not being returned. Any help, guidance, or suggestions would be greatly appreciated. Splunk Cloud version: 8.1.2011.1
I am pretty new to Splunk, and I have a query which uses the table command to filter output to certain fields. The output looks like:

name    designation    salary
ABC     Manager        12345
XYZ     Clerk          6789

I want to convert the output to:

name=ABC, designation=Manager, salary=12345
name=XYZ, designation=Clerk, salary=6789

I'm not sure how to transform the data. Can anyone help?
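A sketch of one way to do it with eval string concatenation (field names taken from the table above; `<base search>` stands in for the original query):

```
<base search>
| table name designation salary
| eval output="name=" . name . ", designation=" . designation . ", salary=" . salary
| fields output
```

The `.` operator concatenates strings in eval and coerces numeric fields like salary as it goes.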
We are at 91%, so it is not immediately urgent, but how do I find out why this alert is firing on one of our 2 indexers? The other is at ~80 percent. We are new to Splunk and have been running it for a few months now; this is the first time this alert has come up.
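One way to see the disk picture from the search head (a sketch; the REST endpoint and its field names should be double-checked against your version):

```
| rest splunk_server=* /services/server/status/partitions-space
| eval pct_used = round((capacity - free) / capacity * 100, 1)
| table splunk_server, mount_point, fs_type, capacity, free, pct_used
```

Comparing this with per-index bucket usage (e.g. via `| dbinspect index=*`) usually shows which index on the fuller indexer is growing fastest.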
I have this dashboard with several dropdowns. In the last dropdown, `hostname`, I have an "ALL" choice whose value is host=*, and when it is put into the search it works like this: index=perfmon host=*. The caveat is that I want ALL (*) to cover only the servers resulting from all the dropdowns above, not simply host=*.

<input type="multiselect" token="name" searchWhenChanged="true">
  <label>Hostname</label>
  <fieldForLabel>Hostname</fieldForLabel>
  <fieldForValue>name</fieldForValue>
  <search>
    <query>| inputlookup ec2_unix_linux_instances.csv | append [| inputlookup ec2_windows_instances.csv ] | search CLUSTER_TYPE=$cluster$ AND ACC_SHORT_NAME=$asn$ AND ACC_FULL_NAME="$afn$" AND ENVIRONMENT=$env$ BUSINESS_UNIT=$bu$ | rename HOST_NAME as name | join type=left name [| inputlookup splunk_total_agents.csv | table name Agent ] | join type=left name [| inputlookup splunk_total_unix_linux_agents.csv | table name Agent ] | dedup name | search Agent="Splunk" | fields name</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <valuePrefix>host="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <choice value="*">ALL</choice>
</input>

Above is the code for my dropdown (host). Here value="*" just takes all the servers, whereas I want it to consider only the servers which result from all the filters.
I had Splunk Enterprise 7 running and recently updated to 8.1.2. After going over the fundamentals video, I wanted to install using the Customize option and Active Directory on my Server 2016 Active Directory server. So I removed Splunk and tried to reinstall using Customize. It gets most of the way done and then does a rollback. I looked in C:\Program Files\Splunk for logs and found splunkd-utility:

02-16-2021 14:49:56.946 -0500 INFO loader - Getting configuration data from: C:\Program Files\Splunk\etc\myinstall\splunkd.xml
02-16-2021 14:49:56.947 -0500 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\Splunk\etc\modules
02-16-2021 14:49:56.947 -0500 INFO loader - loading modules from C:\Program Files\Splunk\etc\modules
02-16-2021 14:49:56.951 -0500 INFO loader - Writing out composite configuration file: C:\Program Files\Splunk\var\run\splunk\composite.xml
02-16-2021 14:49:56.972 -0500 WARN Pstacks - Backtracing is not initialized - GeneratePstacksAction cannot be used..
02-16-2021 14:49:56.972 -0500 WARN WatchdogActions - Initialization failed for action=pstacks. Deleting.
02-16-2021 14:49:56.972 -0500 INFO loader - Service "Splunkd" does not exist
02-16-2021 14:49:56.972 -0500 INFO loader - Skipping validation of index paths because may not be running as the correct user
02-16-2021 14:49:56.972 -0500 INFO loader - Validated 10 indexes in 0 microseconds
02-16-2021 14:49:57.235 -0500 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
02-16-2021 14:49:57.235 -0500 INFO ServerConfig - Host name option is "".
02-16-2021 14:49:58.337 -0500 INFO loader - Getting configuration data from: C:\Program Files\Splunk\etc\myinstall\splunkd.xml
02-16-2021 14:49:58.337 -0500 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\Splunk\etc\modules
02-16-2021 14:49:58.337 -0500 INFO loader - loading modules from C:\Program Files\Splunk\etc\modules
02-16-2021 14:49:58.337 -0500 INFO loader - Writing out composite configuration file: C:\Program Files\Splunk\var\run\splunk\composite.xml

Thanks.
Would it be easier to use a custom Phantom playbook to add a user to a specific AD group from an event trigger, instead of creating a custom app in Splunk using the App Builder?
Hello, it seems that my current process of quarantining a search peer and then running 'splunk offline' causes searches to become zombied: "This search has encountered a fatal error and has been marked as zombied." Is it best practice to quarantine the peer before or after running the splunk offline command? I know that running 'splunk offline' gracefully halts new searches from reaching that indexer, but for some reason I think there is interference when quarantining the host first and then running 'splunk offline'. Thoughts?
Hi, I have a dashboard with a dropdown form allowing users to select the time period they wish to analyse. I am looking to capture the latest-time token of the period in epoch format, but I am running into issues. I have found that if the end time is 'now', then I can use time(); however, this doesn't work when the end time is in the past (i.e. yesterday, the previous week, or the previous year). Can anyone assist me in figuring this out? Many thanks, Dave
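The `addinfo` command exposes the resolved time bounds of whatever range a search actually ran with, past ranges included. A sketch, assuming the dashboard passes the picker's earliest/latest tokens into this search:

```
| makeresults
| addinfo
| eval latest_epoch = info_max_time
| table latest_epoch
```

`info_max_time` is already in epoch seconds (it can be "+Infinity" for an all-time range, which may need special-casing).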
Hello, Qualys scans in our environment are reporting issues with SSL and with TLSv1.0/TLSv1.1. We were able to fix the TLS ones, but we are not sure how to fix the SSL findings below. Please let me know if anyone has any ideas.

SSL/TLS Compression Algorithm Information Leakage Vulnerability
Birthday attacks against TLS ciphers with 64-bit block size vulnerability (Sweet32)
SSL Certificate - Expired
SSL Certificate - Self-Signed Certificate
SSL Certificate - Subject Common Name Does Not Match Server FQDN
SSL Certificate - Signature Verification Failed Vulnerability

Thank you.
I want to forward logs to a third-party system (syslog) without indexing the data in Splunk, but I can't accomplish it; please help. On my heavy forwarder I set up outputs.conf, transforms.conf, and props.conf as follows:

outputs.conf
[syslog:my_syslog_group]
server = <IP>:PORT

transforms.conf
[send_to_syslog]
REGEX = MY REGEX
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

[not_send_to_syslog]
REGEX = MY REGEX
DEST_KEY = queue
FORMAT = nullQueue

props.conf
[source::MY_SOURCE]
TRANSFORMS-t0 = send_to_syslog,not_send_to_syslog

This way, the logs are not forwarded to my syslog; they are just deleted and not indexed. If I remove [not_send_to_syslog] from props and transforms, the data is indexed in Splunk and also forwarded to syslog. How can I achieve my goal of sending data to syslog without indexing it in Splunk? Thanks in advance to those who help me.
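One commonly suggested alternative (a sketch; verify against the outputs.conf spec for your version): since routing an event to nullQueue drops it from the whole pipeline, including syslog output, route to syslog via a default group in outputs.conf and disable local indexing on the heavy forwarder instead of discarding events with a transform.

```
# outputs.conf on the heavy forwarder (sketch; IP/port placeholder kept)
[syslog]
defaultGroup = my_syslog_group

[syslog:my_syslog_group]
server = <IP>:PORT

# Do not index data locally on this instance
[indexAndForward]
index = false
```

With this, selective routing can still be done with the _SYSLOG_ROUTING transform alone, without the nullQueue transform.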