All Topics


I am trying to build a report for AWS FlowLogs which can be used to analyze SG usage. Specifically, I want a list of incoming traffic (by 'dest_ip') which shows all IP/port combinations. Unfortunately, a simple 'stats count by dest_ip,dest_port,protocol,src_ip,src_port' does not result in a usable report, because all the stateful return traffic is listed too. There are tens of thousands of incoming packets with dest_port in the 1024-65535 range, i.e., where that particular 'dest' server had initiated a connection using an ephemeral local port and the return traffic went back to that same port. So 99% of the 'incoming' ports are not actual listeners that we need to include in our SGs.

I have spent hours testing various combinations of filters, e.g. count<5, or dest_port>18000, or (dest_port>1024 AND src_port<1024), or even a 'where NOT IN(src_port,22,53,80,3389, etc)'. But we have a lot of services which use high port numbers, so all these methods accidentally remove valid traffic.

Instead, I think the only accurate method would be one where each connection is evaluated for:
- Is the incoming 'dest_port' above 1024?
- If so, is there a corresponding packet in the preceding 1000 ms, i.e., with identical-but-reversed dest and src IP/ports?
- If so, assume this later packet is the return from a stateful request sent on an ephemeral port, and remove it from the results.

Has anyone else run into this situation, and what was your solution? Thank you for any suggestions!
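A possible starting point (a sketch only; the sourcetype and field names are assumptions taken from the question, and I have not tested this against real FlowLogs data) is to build a direction-agnostic flow key and treat whichever side appears first as the true destination, since the initiating packet should precede its stateful return:

```
sourcetype=aws:cloudwatchlogs:vpcflow
| eval flow_key=if(src_ip.":".src_port < dest_ip.":".dest_port,
        src_ip.":".src_port."|".dest_ip.":".dest_port,
        dest_ip.":".dest_port."|".src_ip.":".src_port)
| stats earliest(dest_ip) as listener_ip, earliest(dest_port) as listener_port
        by flow_key, protocol
| stats count by listener_ip, listener_port, protocol
```

This avoids port-number heuristics entirely: the return packet produces the same flow_key as the request, so only the first-seen direction of each flow survives into the final stats.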
Hello, I want to send a report email with the visualization (which is customized on the statistics page) when "inline > table" is selected in the schedule options. I know a dashboard PDF can be attached, but I want to see the visualization in the email body. I hope I have been clear enough. Regards,
Scenario: two different source types being sent to a UF (snort and firewall) from the same IP/source. The data is JSON (truncated examples below):

{ "client": {"ip": "...",... ... "type": "snort" }
{ "fw": {"iptables": "...",... "client": {"ip": "...",... ... "type": "firewall" }
{ "client": {"ip": "...",... ... "type": "snort" }
{ "fw": {"iptables": "...",... "client": {"ip": "...",... ... "type": "firewall" }

I've tried using regexes like the below to separate them.

props.conf
[udp:514]
TRANSFORMS-set_sourcetype = snort,firewall

transforms.conf
[snort]
REGEX = ^(.*?type\"\:)(?P<type>("([^}]|"")*"))
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::$type

[firewall]
REGEX = ^(.*?type\"\:)(?P<type>("([^}]|"")*"))
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::$type

Looking for the best method to assign the sourcetypes. I do not have control over the data structure or how it is sent to the UF.
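One thing that stands out: both transforms use the same REGEX, so they each capture whatever type value appears, and the last matching transform wins. A possible fix (a sketch; the stanza names are made up, and I have not tested it) is to give each transform a regex that matches only its own literal type value, noting that index-time TRANSFORMS run on a parsing instance (indexer or heavy forwarder), not on a UF:

```
# props.conf
[udp:514]
TRANSFORMS-set_sourcetype = set_snort, set_firewall

# transforms.conf
[set_snort]
REGEX = "type"\s*:\s*"snort"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::snort

[set_firewall]
REGEX = "type"\s*:\s*"firewall"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::firewall
```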
I have a table that looks like...

CUSTOMER   ADDRESS      CONTACT
A          10 A Road    (111)-222-3334
           30 C Road    (222)333-4444
B          20 B Lane    (111)-221-1122
           40 D Circle  (444)-444-2222

Now I want to use the CUSTOMER name as the header in the table so that it looks like

A
ADDRESS      CONTACT
10 A Road    (111)-222-3334
30 C Road    (222)333-4444

B
20 B Lane    (111)-221-1122
40 D Circle  (444)-444-2222
Hi Team,

We have three time fields:
Time - indexed time (CSV file upload time)
Last_uploaded - microservice's latest deployment time
Running_since - microservice's start time

All time fields are in "%+" format (e.g. Fri Apr 24 05:00:20 +08 2020) and in the same timezone. The fields below are pushed to Splunk through a CSV file:

Time,Org,Space,Microservices,State,Stack,Buildpacks,Last_uploaded,Total_instance,Running_instance,Instance_state,Running_since,Used_CPU,Used_memory_bytes,Total_memory_bytes,Used_disk_bytes,Total_disk_bytes

Please help with how to create an input panel for Last_uploaded and Running_since, and what the query would be for the requirements below:
- How to query all microservices deployed between particular dates, e.g. 14th April to 16th April
- How many microservices were started between particular days, e.g. 17th April to 20th April

Tried a few options but no luck:

| eval _time=strptime(Time,"%+")
| eval Latest_deployment_time=strptime(Last_uploaded,"%+")
| eval Instance_start_time=strptime(Running_since,"%+")

Regards,
Thomas Mathias
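Since "%+" support in strptime is platform-dependent, one option (a sketch; the format string is inferred from the sample Fri Apr 24 05:00:20 +08 2020, and the two-digit +08 offset may need normalizing to +0800 first) is an explicit format plus epoch-range filtering:

```
| eval deploy_epoch=strptime(Last_uploaded, "%a %b %d %H:%M:%S %z %Y")
| where deploy_epoch >= strptime("14/04/2020", "%d/%m/%Y")
    AND deploy_epoch <  strptime("17/04/2020", "%d/%m/%Y")
| stats dc(Microservices) as deployed_microservices
```

The same pattern applied to Running_since answers the second question; the input panel can be two text or time inputs whose tokens replace the hard-coded dates.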
Hello Experts,

While trying to make my custom Splunk command global, I am unable to do so because of the below error:

Splunk could not update permissions for resource admin/commandsconf [HTTP 500] Splunkd internal error; [{'type': 'ERROR', 'code': None, 'text': "This handler does not support the 'edit' action and cannot be used for ACL modification."}]

Please help with this.
mvexpand metrics
| spath input=metrics
| rename "cityCode" as pcc
| where if($selected_pcc|s$="all", like(pcc,"%"), like(pcc,$selected_pcc|s$))
| stats count as Total

I use the search above to filter the results by looking at the JSON field cityCode and verifying that its value equals my dropdown selection; by default all cityCode values are included ("%"). The issue is that when a message does not have the cityCode field, the default "All" selection does not work, since like(pcc,"%") fails. Currently the conditional selection is inside the where clause. Is there a way to do the conditional selection outside the where clause, i.e., if I did not select a cityCode, the where clause should be ignored completely?
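One way to make the "all" case skip the filter entirely (a sketch, reusing the same token; untested against the actual dashboard) is to test the token itself first, so events with no cityCode field still pass when "all" is selected:

```
mvexpand metrics
| spath input=metrics
| rename cityCode as pcc
| where $selected_pcc|s$="all" OR like(pcc, $selected_pcc|s$)
| stats count as Total
```

When the token is "all", the first disjunct is true for every event regardless of whether pcc exists, so the like() on a missing field is never the deciding factor.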
Hello, I'm new here and I wanted some help with this issue. My instance is getting many errors for a bucket replication that keeps flapping up/down. In the master dashboard I have the errors "search factor is not met" and "replication factor is not met", along with main page warnings like "msg='target doesn't have bucket now. ignoring'" and "making bucket serviceable, we have enough peers now", which suggests to me that it's flapping beyond the up/down I see in the master dashboard.

I have a small infrastructure: 1 master, 2 indexers, 1 search head, 1 heavy forwarder. My configuration in local (which should override the default server.conf) is fine, with replication_factor=2 and search_factor=2, but it seems that no matter which change I apply, the errors remain. I tried to resync the bucket, but actually I'm not even sure it did anything.

Among my fix-up tasks I have 2: one for the replication factor and one for the search factor. For the search factor I have the following:

fixup reason: unmet rf
current status: Missing enough suitable candidates to create searchable copy in order to meet replication policy. Missing={ default:1 }

For the replication factor:

fixup reason: unmet rf
current status: empty

splunk btool server list --debug gives me output where replication_factor in the local config is 2 and in the default config is 3, but as far as I know the local config should override the default one in this case. Could you please let me know what is going on? I have some basic knowledge of administration and clustering from reading the Splunk docs, but I'm not sure I'm really into it yet. I'm stuck! Thank you in advance.
I'm monitoring a JSON file and forwarding the data using a UF to my indexers. I'm having problems extracting the JSON fields. Here is my props file; nothing is being extracted. (I was trying to upload a screenshot but I don't have enough points.) I know it's something with the props but I'm unable to figure it out. Any help would be appreciated.

[test]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
disabled = false
pulldown_type = true

Thanks
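One common cause (an assumption, since the props placement isn't shown in the question): INDEXED_EXTRACTIONS is structured parsing that must live in props.conf on the UF itself for monitored files, while KV_MODE = none belongs on the search head. A sketch of the split:

```
# props.conf on the universal forwarder (structured parsing happens here)
[test]
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true

# props.conf on the search head (prevents double extraction at search time)
[test]
KV_MODE = none
```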
I have looked at the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/7.2.9/Alert/EmailNotificationTokens) regarding tokens that can be used in emails when a notable event triggers. The notable event provides a title for the alert; however, it appears that the tokens available in the notable event can't be sent via email. The only option seems to be using the description, but sometimes the title of a notable event and its description can be and are different. Has anyone else run into this who could suggest a way around it? It may be that a less informative (more generic) title could be put into the alert by using something like an eval, for example

| eval title="very generic alert for something"

and then using this in $results.title$, but that seems like wasted effort. Any thoughts on how to get the title of the notable event sent in an email? Note: using Enterprise Security.
Has anyone successfully implemented user session timeouts on their SHC? We are experiencing users keeping multiple dashboards open, sometimes overnight, with multiple panels refreshing, which is creating unnecessary load on the SHC. I've set sessionTimeout to 15 minutes in server.conf; however, this does not seem to end the user session, since the dashboard refreshes every 5-10 minutes. Wondering if anyone has had success ending user sessions after 15-20 minutes of inactivity by updating the corresponding timeouts in web.conf?

Possible solution?

web.conf
[settings]
tools.sessions.timeout = 15
ui_inactivity_timeout = 15

server.conf
[general]
sessionTimeout = 900s
Example below:

column1:column2
1:10
2:15
4:30
5:40

In this example, column1 is missing "3"; I would like to create that record with a "0" value in column2. How can this be done? Thanks.
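Assuming column1 is numeric and increases in steps of 1, makecontinuous can insert the missing rows and fillnull can zero out the gaps. A sketch (the base search is a placeholder):

```
... your base search producing column1 and column2 ...
| makecontinuous column1
| fillnull value=0 column2
```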
I have a DB Connect query, select * from TABLE, which worked flawlessly until yesterday. Now this query, and 13 other queries, no longer function and result in this error, per Splunk DB Connect:

External search command 'dbxquery' returned error code 1. Script output = "HTTPError: HTTP 500 Internal Server Error -- External handler failed with code '1' and output: ''. See splunkd.log for stderr output."

I logged into the DB to run the query directly against it, and it completed successfully. What else can I check? I've reviewed splunkd.log but it does not give me any data that I can see.
According to https://docs.splunk.com/Documentation/Splunk/8.0.3/Indexer/AboutSmartStore#Current_restrictions_on_SmartStore_use:

"For multisite clusters, if any SmartStore indexes use report acceleration or data model acceleration, you must disable search affinity by setting all search heads to site0."

Since Splunk Enterprise Security uses data model acceleration, would I be ill-advised to implement ES on SmartStore in a 2-site configuration? I'm concerned about performance. Help with this will be much appreciated!
I recently introduced a few parameters around the different buckets (hot, warm, cold, etc.). Now I need to see whether the buckets are rotating based on the values I provided, and I am trying to find an effective search to help. The parameters I recently introduced and want to validate based on bucket size and movement are:

maxDataSize = auto_high_volume
maxHotBuckets = 10
maxWarmDBCount = 15
maxTotalDataSizeMB = 512000
frozenTimePeriodInSecs = 7776000
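To watch the buckets actually roll, dbinspect reports each bucket's state and size; a sketch (the index name is a placeholder):

```
| dbinspect index=your_index
| stats count as buckets, sum(sizeOnDiskMB) as total_mb, max(sizeOnDiskMB) as largest_mb by state
```

Running this periodically and comparing the counts per state (hot, warm, cold) against maxHotBuckets and maxWarmDBCount should show whether the rotation matches the configured limits.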
I have inherited a Splunk installation from the previous administrator where there are a heavy forwarder and a UF installed on the same machine. Since this is bad practice in terms of performance, I am planning to remove the UF and copy the relevant inputs files to the Splunk Enterprise instance (which acts as a heavy forwarder). How can I avoid re-indexing the same logs when copying the inputs configuration from the UF to the HF (mainly Windows events)? Thanks.
I have a new client that has files named as follows: xxxx.xxxx.log. Splunk is not ingesting them. How can I ingest logs that have that type of naming convention? I believe Splunk is only looking at the xxxx.xxxx part and can't match it to the /*.log stanza I have in inputs.conf.
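If the stanza path pattern is the issue, an explicit whitelist on the monitor stanza may help; a sketch (the directory path and sourcetype are placeholders):

```
[monitor:///var/log/clientapp]
whitelist = \.log$
sourcetype = client_logs
```

The whitelist is a regex applied to the full path, so it matches any filename ending in .log, including ones with extra dots such as xxxx.xxxx.log.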
Hello All, hope you're well. How can I check the configured retention time (after which data gets deleted) using the CLI and a query in Splunk? And how can I change the retention to 3 months?
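A sketch of both steps (the index name is a placeholder, and 3 months is approximated as 90 days):

```
# Check the effective retention per index from the CLI:
$SPLUNK_HOME/bin/splunk btool indexes list your_index --debug | grep frozenTimePeriodInSecs

# Change it in indexes.conf (then restart, or push the bundle in a cluster):
[your_index]
frozenTimePeriodInSecs = 7776000
```

Buckets are only frozen once their newest event is older than frozenTimePeriodInSecs, so deletion happens per bucket, not per event.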
Hi Team,

As part of the Microsoft Azure Add-on for Splunk, we configured the application for Azure Event Hub data collection. For the connection string field we added the primary key, as in the sample below:

Endpoint=******/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=8888888^^^888*

In the connection string we copied the complete string starting from Endpoint=. I see the errors below; kindly assist.

State.START: 0> 2020-04-24 08:13:49,369 INFO pid=31779 tid=MainThread file=connection.py:work:259 | 'Closing tlsio from a state other than TLSIO_STATE_EXT_OPEN or TLSIO_STATE_EXT_ERROR'

Thanks,
Subbu
Hi All,

I want to generate logs line by line. After the first line is generated, it should wait 60 seconds before generating the second line. Sample log:

529 29/03/20 12:49:13 000002
CASH WITHDRAWAL 0,0000.00 ALL
0000000000*******910
12:49:35 TRANSACTION END
*461*03/29/2020*12:49* *PRIMARY CARD READER ACTIVATED*
*462*03/29/2020*18:39* *TRANSACTION START*
CARD INSERTED
CARD: ************1806
DATE 29-03-20 TIME 18:39:07
18:39:08 ATR RECEIVED T=0
18:39:15 PIN ENTERED
18:39:17 OPCODE = B A DB
18:39:18 GENAC 1 : ARQC
18:39:20 GENAC 2 : TC
----------
530 29/03/20 18:39:30 000001
BALANCE INQUIRY ALL
0000000000*******806
18:39:33 CARD TAKEN
18:39:38 TRANSACTION END
*463*03/29/2020*18:39* *PRIMARY CARD READER ACTIVATED*
*464*03/29/2020*23:36* *PRIMARY CARD READER ACTIVATED*
*465*03/30/2020*06:13* *TRANSACTION START*
CARD INSERTED
CARD: ************4417
DATE 30-03-20 TIME 06:13:45
06:13:46 ATR RECEIVED T=0
06:13:53 PIN ENTERED
06:14:07 OPCODE = A A DB
06:14:23 NOTES STACKED
06:14:25 CARD TAKEN

For example:

529 29/03/20 12:49:13 000002 - wait another 60 seconds before the next line
CASH WITHDRAWAL 0,0000.00 ALL - wait another 60 seconds before the next line
0000000000*******910 - wait another 60 seconds before the next line
12:49:35 TRANSACTION END - wait another 60 seconds before the next line

Hope you get my point.
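If the goal is replaying an existing file into a location Splunk monitors, a small shell function can do it (a sketch; the file paths are assumptions, and the delay is a parameter so the behavior can be verified quickly with a delay of 0):

```shell
# replay_log FILE [DELAY]: print each line of FILE, sleeping DELAY
# seconds (default 60) between lines. Redirect output to the file
# Splunk monitors, e.g.: replay_log sample.log >> /var/log/replay.log
replay_log() {
    delay="${2:-60}"
    # read line by line; the || clause keeps a final line with no newline
    while IFS= read -r line || [ -n "$line" ]; do
        printf '%s\n' "$line"
        sleep "$delay"
    done < "$1"
}
```

Running it in the background (replay_log sample.log >> /var/log/replay.log &) lets the monitored file grow one line per minute.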