All Posts

@Avantika Did you deploy the configuration from the deployment server to the heavy forwarder? Is the heavy forwarder connected to the deployment server?
Ok, now I understand your requirements. You can do it with props.conf & transforms.conf too.

MV_ADD = <boolean>
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls what the extractor does when it finds a field which already exists.
* If set to true, the extractor makes the field a multivalued field and appends the newly found value, otherwise the newly found value is discarded.
* Default: false

This parameter adds multiple values into a multivalue field. I haven't used the field extractor enough to recall whether it has an option to do the same, but I think MV_ADD is your solution.
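For example, a minimal sketch with a hypothetical sourcetype and field name, just to show where MV_ADD goes:

# transforms.conf (stanza, regex and field name are illustrative)
[extract_all_codes]
REGEX = code=(?<code>\w+)
MV_ADD = true

# props.conf
[my_sourcetype]
REPORT-codes = extract_all_codes

With MV_ADD = true, every match of the regex in an event is appended to the code field instead of only the first one being kept.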
When running index="index1" | search "slot", it gives the events below, which include time and hostname.

events:
{"priority":6,"sequence":4704,"sec":695048,"usec":639227,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 added\n SUBSYSTEM=vmbus\n DEVICE=+vmbus:54243fd-13dc-6045-bddd-13dc6045bdda"}
{"priority":6,"sequence":4698,"sec":695037,"usec":497286,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 removed\n SUBSYSTEM=vmbus\n DEVICE=+vmbus:54243fd-13dc-6045-bddd-13dc6045bdda"}

My requirement is that I need the difference in time between the "removed" and "added" messages for a particular day, i.e. it should not include previous events.
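To illustrate the kind of calculation I mean (a rough sketch only; the extracted field names and the assumption of one added/removed pair per slot per day are mine):

index="index1" "slot"
| rex "VF slot (?<slot>\d+) (?<action>added|removed)"
| bin _time span=1d
| eval added_time=if(action=="added", _time, null()), removed_time=if(action=="removed", _time, null())
| stats max(added_time) as added_time max(removed_time) as removed_time by _time slot
| eval diff_seconds = added_time - removed_time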
@gcusello Since I'm testing these conf files in my sandpit env first, my conf files are under etc/apps/git/splunk-deployment-apps/parsing_syslog/local (my local branch). These are UDP logs, and inputs.conf is configured by the user on their end.
Cracked it

| gentimes start=12/01/2024 end=-1 increment=1d
| eval _time=starttime
| eval countries="Sweden,Finland,Estonia,Lithuania,Norway,Latvia"
| makemv delim="," countries
| mvexpand countries
| eval value=round(random() % 100, 0)
| timechart max(value) by countries
Subject: Splunk SOAR Upgrade Issue - 6.2.2.134 to 6.3.1.178

Dear VatsalJagani and the Splunk SOAR community,

First, thanks for your feedback (VatsalJagani) on my previous concerns. I hope this message finds you well in the New Year. I am writing to request assistance with an issue I am encountering while upgrading Splunk SOAR from version 6.2.2.134 to 6.3.1.178 on an Amazon Linux instance, connected via PuTTY. I have successfully installed both versions of Splunk SOAR on the instance. However, when attempting to upgrade from 6.2.2.134 to 6.3.1.178 using the script provided in the Splunk documentation, I encounter the following errors:

Failed to detect installed version
Failed to initialize deployment

I have carefully reviewed and followed the steps outlined in the Splunk documentation, but the issue persists. I would greatly appreciate any guidance or suggestions to resolve this upgrade issue. Thank you for your time and assistance.

Sincerely,
MV fields are fine. In fact, that's how it extracts when using rex directly. In this case, though, despite using the *exact* same regex, it only extracts the first of the attachments in the dummy data when added as a proper field via the Field Extractor. That said, I wrote the regex myself; Splunk didn't generate it. I entered it manually in the Field Extractor. I'm trying to have the fields extracted because, aside from being useful data, I want to use the field in my base search, e.g. attachments=*, and obviously I can't do that before extracting it with rex.
Hi

Trying to display some data with a single value (and sparkline) using the single value viz. I want to break the result up over "countries", but nothing comes out. I get a sparkline, but the rest are all zeros.

| gentimes start=01/01/2024 end=01/31/2024 increment=1d
| eval _time=starttime
| eval countries="Sweden,Finland,Estonia,Lithuania,Norway,Latvia"
| makemv delim="," countries
| mvexpand countries
| eval value=round(random() % 100, 0)
| streamstats count
| sort countries
| timechart span=1d avg(value) by countries

Dashboard
Hi

I think your issue is setting sourcetype inside a source:: stanza. Splunk has only one linear data pipeline, and once it has picked up those events based on the source:: definition, it applies only those settings during the indexing phase. You cannot put events back at the start of the pipeline and run the same event manipulation again under a sourcetype stanza.

Your aws:elb:accesslog definitions are used, but only at search time, not at index time. And since timestamp settings take effect only at index time, it's no surprise that nothing happens to your _time value. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Configurationparametersandthedatapipeline

Have you tried adding those same definitions under every source:: stanza? Of course, since you are using HEC, it also matters which endpoint you use: there are differences in what manipulations you can do with props.conf depending on the endpoint.

r. Ismo
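For illustration only, a sketch of what that could look like under one of the source:: stanzas, reusing the question's own TIME_PREFIX; the TIME_FORMAT and lookahead values are assumptions, and remember that the HEC /event endpoint may bypass timestamp extraction entirely, unlike /raw:

# props.conf on the instance that parses the HEC data
[source::http:aws-lblogs]
TIME_PREFIX = ^.*?(?=20\d\d-\d\d)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40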
Interesting - I see now - yes, you can see that at the start there are event listeners for the button, but once the first call to set(name, value) in SetToken happens, the event listener is removed when the trigger() function fires as the value changes. I don't know why, but it doesn't get re-registered, so there is no longer an event listener.
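As a possible workaround (just a sketch, with a hypothetical button id my_button and token my_token), delegating the click handler to the document means it keeps firing even if the listener bound directly to the button disappears after a token change:

require(['jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function ($, mvc) {
    // default (unsubmitted) token model of the dashboard
    var tokens = mvc.Components.getInstance('default');

    // Delegated handler: attached to the document, so it survives whatever
    // removes the listeners bound directly to the button element.
    $(document).on('click', '#my_button', function () {
        tokens.set('my_token', Date.now());
    });
});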
If I recall correctly, the same question came up some time ago, but I cannot find it now. Anyhow, the answer was the same then too. Maybe you can use the following as a workaround: is it possible to change the sourcetype to e.g. <host>:<original sourcetype> for the events you are forwarding to AWS S3 buckets? That way you would fulfil your requirement to store them by hostname.
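For what it's worth, a sketch of how that rename could be done with an index-time transform, assuming a hypothetical original sourcetype my_sourcetype (FORMAT can only use the capture from SOURCE_KEY, so the original sourcetype name has to be hard-coded, one stanza pair per sourcetype you rewrite):

# props.conf
[my_sourcetype]
TRANSFORMS-hostprefix = prefix_sourcetype_with_host

# transforms.conf
[prefix_sourcetype_with_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.+)$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::$1:my_sourcetype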
Wait a minute. You said that you have 1TB SSD + 3TB SAS disks and you are ingesting 600-800GB per day. How many indexers do you have? I really hope you have a cluster containing several indexers. Do you have any Splunk premium apps like ES or ITSI running in your environment? So what kind of architecture do you currently have to manage your workload, and what kind of nodes, from a resource point of view?

Basically your understanding is quite correct. With this daily ingestion volume you definitely need a cluster, or at least several indexers taking data in and serving searches at the same time. Do you have a MC (Monitoring Console) up and running? It is an excellent tool for getting more information about what is happening in your environment.

Volumes are an excellent way to manage your indexers' space. I usually define one volume for hot+warm and another for cold. Remember that you shouldn't allocate all disk space for use; there must be some free space left for disk operations. How much depends on your filesystem; a rule of thumb is 10-20%. Also, don't allocate all of the filesystem space to the Splunk volume: leave some room there too, as Splunk needs it when it rolls data from warm to cold, and from cold to frozen if you have a separate frozen location.
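A minimal indexes.conf sketch of that layout; the paths, sizes and index name are hypothetical, and the volume sizes should be chosen so that 10-20% of each filesystem stays free:

# indexes.conf (illustrative values only)
[volume:hotwarm]
path = /opt/splunk/ssd
maxVolumeDataSizeMB = 800000

[volume:cold]
path = /opt/splunk/sas
maxVolumeDataSizeMB = 2500000

[my_index]
homePath   = volume:hotwarm/my_index/db
coldPath   = volume:cold/my_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/my_index/thaweddb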
We're sending AWS ELB access logs (Classic ELB, NLB and ALB) to HEC using Lambda. I have installed the Splunk Add-on for AWS on the SH and the HEC instance. The add-on has regexes to parse the access logs, and all the regex field extractions for the access logs seem to be working fine. However, we're having issues with the timestamp of the event: it is extracted as a "timestamp" field, but _time is being assigned the ingestion time instead of the actual time from the event. I tried adding a TIME_PREFIX in props.conf in Splunk_TA_AWS for the aws:elb:accesslogs sourcetype, but it doesn't work.

Sample events -

NLB -
tls 2.0 2025-01-15T23:59:54 net/loadbalancerName/guid 10.xxx.xxx.1:32582 10.xxx.x.xx:443 1140251 85 3546 571 - arn:aws:acm:us-west-2:026921344628:certificate/guid - ECDHE-RSA-XXXX-GCMXXX tlsv12 - example.io - - - 2025-01-15T23:40:54

ALB -
https 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 10.0.0.1:80 0.086 0.048 0.037 200 200 0 57 "GET https://www.example.com:443/ HTTP/1.1" "curl/7.46.0" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337281-1d84f3d73c47ec4e58577259" "www.example.com" "arn:aws:acm:us-east-2:123456789012:certificate/12345678-1234-1234-1234-123456789012" 1 2018-07-02T22:22:48.364000Z "authenticate,forward" "-" "-" "10.0.0.1:80" "200" "-" "-" TID_123456

ELB -
2018-12-31T00:08:01.715269Z loadbalancerName 187.xx.xx.xx:48364 - -1 -1 -1 503 0 0 0 "GET http://52.x.xxx.xxx:80/ HTTP/1.1" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" - -

props.conf

## Classic Load Balancer ##
[source::http:lblogs]
EXTRACT-elb = ^\s*(?P<timestamp>\S+)(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<backend>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<backend_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<backend_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))
EVAL-rtt = request_processing_time + backend_processing_time + response_processing_time
sourcetype = aws:elb:accesslogs

## Application Load Balancer ##
[source::http:aws-lblogs]
EXTRACT-elb = ^\s*(?P<type>\S+)(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<target>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<target_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<target_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))(\s+(?P<target_group_arn>\S+))(\s+"(?P<trace_id>[^"]+)")(\s+"(?P<domain_name>[^"]+)")?(\s+"(?P<chosen_cert_arn>[^"]+)")?(\s+(?P<matched_rule_priority>\S+))?(\s+(?P<request_creation_time>\S+))?(\s+"(?P<actions_executed>[^"]+)")?(\s+"(?P<redirect_url>[^"]+)")?(\s+"(?P<error_reason>[^"]+)")?
EVAL-rtt = request_processing_time + target_processing_time + response_processing_time
priority = 1
sourcetype = aws:elb:accesslogs

## Network Load Balancer ##
[source::http:lblogs]
EXTRACT-elb-nlb = ^\s*(?P<type>\S+)(\s+(?P<log_version>\S+))(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<listener>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<destination_ip>[\d.]+):(?P<destination_port>\d+))(\s+(?P<connection_time>\S+))(\s+(?P<tls_handshake_time>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+(?P<incoming_tls_alert>\S+))(\s+(?P<chosen_cert_arn>\S+))(\s+(?P<chosen_cert_serial>\S+))(\s+(?P<tls_cipher>\S+))(\s+(?P<tls_protocol_version>\S+))(\s+(?P<tls_named_group>\S+))(\s+(?P<domain_name>\S+))(\s+(?P<alpn_fe_protocol>\S+))(\s+(?P<alpn_be_protocol>\S+))(\s+(?P<alpn_client_preference_list>\S+))
sourcetype = aws:elb:accesslogs

[aws:elb:accesslogs]
TIME_PREFIX = ^.*?(?=20\d\d-\d\d)
TIME_FORMAT =
MAX_TIME_LOOKAHEAD =
I have configured an OAuth client ID and secret on my client's ServiceNow instance and configured the account in the Splunk Add-on for ServiceNow. The configuration completed without issue. I was then able to configure an input to pull from the CMDB database using the OAuth credentials. However, when I try to pull the "sn_si_incident" table from the SIR database, I get the message "Insufficient rights to query records: Fields present in the query do not have permission to be read". When I configured the OAuth credentials in the add-on, I used an account (e.g. svc_account1) that I know has permission to read from this table. We have also tested with Postman and can pull from the security incident table. In Postman we configured the client ID/secret as well as the username and password (using svc_account1). We noticed that when we use OAuth in Postman, the user is the correct one (svc_account1); however, when we use the Splunk add-on, the user is my own account. Has anyone ever tried to use OAuth to access the security database tables? Is the add-on built to handle them? I wonder about this because when I try to select a table from the dropdown I don't see "sn_si_incident" (probably because the only tables available are from the CMDB database). Thanks.
Hi

I didn't get why you cannot use the rex that is working? Personally, I always prefer my own rex over those created by the field extractor. It's Splunk's design decision that if there are multiple matches, they are put into mv fields. You can always expand those into individual events if mv fields are not suitable for your use case.

| makeresults
| eval _raw = "orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, {'NotSecrets!!.txt': 'fileHash': 'a3b9adaee5b83973e8789edd7b04b95f25412c764c8ff29d0c63abf25b772646'}, {}}, 'Secrets!!.txt': 'fileHash': 'c092a4db704b9c6f61d6a221b8f0ea5f719e7f674f66fede01a522563687d24b'}, {}}} orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,"
| rex max_match=0 "(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)"
| eval foo = mvzip(attachments,sha256,";-;")
| mvexpand foo
| eval foo=split(foo,";-;")
| eval attachments=mvindex(foo,0)
| eval sha256=mvindex(foo,1)
| table attachments sha256

r. Ismo
It replaces all existing fields, so you don't need to write everything out here. You could also add e.g. values(foo*) as bar*, and then it takes only the fields that start with foo and puts them into result fields named bar*. This is a quite useful and commonly used feature in SPL.
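A quick run-anywhere illustration (the foo_* field names are made up):

| makeresults count=3
| streamstats count as id
| eval foo_host="web0".id, foo_user="user".id, status=200
| stats values(foo*) as bar* by status

The result has bar_host and bar_user as multivalue fields, while fields that don't match foo* (other than the by field) are dropped.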
| appendpipe [| stats count | where count==0]
| makeresults format=csv data="raw CSR-345sc453-a2da-4850-aacb-7f35d5127b21 - Sending error response back in 2136 msecs. 00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations 001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1; 00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z 01hg34hgh44hghg4 - Exception while calling System A - null Exception message - CSR-a4cd725c-3d73-426c-b254-5e4f4adc4b26 - Generating exception because of multiple stage failure - abc_ELIGIBILITY 0013c5fb1737577541466 - Exception message - 0013c5fb1737577541466 - Generating exception because of multiple stage failure - abc_ELIGIBILITY b187c4411737535464656 - Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response creation couldn't happen for all the placements. Creating error response."
| rex field=raw max_match=0 "(\b)(?<words>[A-Za-z'_]+)(\b|$)(?!\-)"
| eval words = mvjoin(words, " ")
I think this fixed my issue, thanks! Just out of curiosity, what does adding the values(*) do? Not sure I have seen that before.
I gotcha, thank you for the info!