All Posts

Hi @Utkc137 , then where did you install the add-on? It should be on the first HF the data passes through or (if no HFs are present) on the Indexers. Ciao. Giuseppe
Hi @Utkc137 , did you try the sourcetype "bluecoat"? That should be the one you assigned to your input. Ciao. Giuseppe
Also, the sourcetype I used originally is the one mentioned in inputs.conf, and it remains the same until the logs are ingested.
Just tested using the source in the props stanza name (the source is defined in inputs.conf) and it's still picking up the index time as the timestamp.
Hi @Utkc137 , there's a precedence order in how conf files are read, and that add-on applies some transformations, so the sourcetype you added probably isn't the one in effect when the event is parsed: the final sourcetype is created by a transform after the local file is read. Check the default sourcetype the transform assigns and try adding your configuration to that sourcetype. Ciao. Giuseppe
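A minimal sketch of that idea (the stanza name here is an assumption; check the add-on's transforms.conf and btool output for the sourcetype that is actually in effect at parse time):

```
# props.conf in the add-on's local/ directory -- a sketch, not verified
# against the actual TA; replace the stanza name with the sourcetype
# the transform assigns to the events
[bluecoat:proxysg:access]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```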
Hi All, I have a Bluecoat proxy log source for which I am using the official Splunk add-on. However, I noticed that the timestamp is not being parsed from the logs; the index time is being used instead. To remedy this, I added a custom props in ../etc/apps/Splunk_TA_bluecoat-proxysg/local with the following stanza:

```
[bluecoat:proxysg:access:syslog]
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=^
```

The rest of the configuration is the same as in the base app (Splunk_TA_bluecoat-proxysg).

During testing, when I upload logs through Add Data, the timestamp is parsed properly. However, when I start using SplunkTCP to ingest the data, the timestamp extraction stops working. Note that in both scenarios the rest of the parsing configuration (field extraction and mapping) works just fine.

Troubleshooting:
1. I checked props with btool; I can see the custom stanza I added there.
2. Tried putting the props in ../etc/system/local.
3. Restarted Splunk multiple times.

Any ideas I can try to get this to work? Or where should I look?

Sample log:

```
2024-12-03 07:30:06 9 172.24.126.56 - - - - "None" - policy_denied DENIED "Suspicious" - 200 TCP_ACCELERATED CONNECT - tcp beyondwords-h0e8gjgjaqe0egb7.a03.azurefd.net 443 / - - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0" 172.29.184.14 39 294 - - - - - "none" "none" "none" 7 - - 631d69b45739e3b6-00000000df56e125-00000000674eb37e - -
```

Splunk Search (streaming data): [screenshot]
Splunk Search (uploaded data): [screenshot]
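One way to see which timestamp settings actually win for that stanza is btool with --debug, which shows the file each setting comes from. Run it on the instance that parses the splunktcp stream (the first HF or the indexers, not the UF):

```
# shows the effective settings for the stanza and their source files
$SPLUNK_HOME/bin/splunk btool props list bluecoat:proxysg:access:syslog --debug
```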
Hi @rahusri2 , let me understand: you have a Forwarder (UF or HF) using the outputs.conf you shared to forward logs to Splunk Cloud, and it receives syslog (UDP on port 8125), is that correct? My first hint is not to use Splunk as the syslog receiver but rsyslog (or syslog-ng or SC4S), because that way you can continue to receive syslog even if Splunk is down. Then you can use a UF or an HF to read those files and forward them to Splunk Cloud. In addition, you could have at least two (or more) UFs receiving syslog behind a Load Balancer to have real HA and not lose data. But your error is probably something else: to send logs to Splunk Cloud from a Forwarder, you have to download an app (the forwarder credentials app) from your Splunk Cloud instance, containing the certificates and passwords to connect to Splunk Cloud; you cannot send logs without it. For more info see https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/UsingforwardingagentsCloud Ciao. Giuseppe
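For reference, installing that credentials package usually looks like this (a sketch; the file name splunkclouduf.spl is the typical default from the Splunk Cloud Universal Forwarder download page, and the path is illustrative):

```
# on the forwarder, after downloading the credentials package
$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl
$SPLUNK_HOME/bin/splunk restart
```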
Hello There,

I'm having issues with a multiselect input dropdown:

```
<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv | search displayname != "ABN8" AND displayname != "ABR8" AND displayname != "ABRA7" AND displayname != "ABMAN2"</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
</input>
<input type="multiselect" token="system_number" searchWhenChanged="true">
  <label>Node</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>Node</fieldForLabel>
  <fieldForValue>sys_number</fieldForValue>
  <change>
    <eval token="form.system_number">case(mvcount('form.system_number') == 2 AND mvindex('form.system_number', 0) == "*", mvindex('form.system_number', 1), mvfind('form.system_number', "\\*") == mvcount('form.system_number') - 1, "*", true(), 'form.system_number')</eval>
  </change>
  <search>
    <query>| inputlookup node.csv | fields site prefix Node sys_number | eval token_value = "$siteid$" | eval site_val = if(match(token_value, "OR\s*index="), split(replace(token_value, "\s*OR\s*index=\s*", ","), ","), token_value) | where prefix=site_val | dedup Node | table Node sys_number</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <prefix>"</prefix>
  <suffix>"</suffix>
  <valueSuffix>","</valueSuffix>
  <delimiter> </delimiter>
</input>
```

The problem: I need the field for label to be Node, but when I select a value in siteid, then select a value in Node, and then select a second value in siteid, the Node selection changes its displayed value to sys_number. It should stay Node, since fieldForLabel is set to Node, but it changes to sys_number.

This only happens after selecting values in Node; when I then select values in siteid, Node behaves weirdly. Otherwise it's fine. Thanks!
Hello Community, I am trying to create a connection so that I can send metrics arriving on UDP port 8125 on Splunk Enterprise (running locally) to Splunk Cloud (prd-p-7mh2z.splunkcloud.com), but I am getting the error below. As I need to send UDP data on port 8125, I am using a heavy forwarder instead of a universal forwarder, and I have configured the heavy forwarder to point to "prd-p-7mh2z.splunkcloud.com:9997".

Getting this error on the dashboard:

```
The TCP output processor has paused the data flow. Forwarding to host_dest=prd-p-7mh2z.splunkcloud.com inside output group default-autolb-group from host_src=rahusri2s-MacBook-Pro.local has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
```

```
# cat /Applications/splunk/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 1

[tcpout:default-autolb-group]
server = prd-p-7mh2z.splunkcloud.com:9997

[tcpout-server://prd-p-7mh2z.splunkcloud.com:9997]
```

```
# cat /Applications/splunk/etc/apps/search/local/inputs.conf
[splunktcp://9997]
connection_host = ip

[udp://8125]
connection_host = dns
host = rahusri2s-MacBook-Pro.local
index = 4_dec_8125_udp
sourcetype = statsd
```

Thanks in advance. #splunk
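A quick way to check whether the forwarder has actually established a connection to the Splunk Cloud receivers (run on the heavy forwarder):

```
# lists configured receivers and whether each connection is active or inactive
/Applications/splunk/bin/splunk list forward-server
```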
Thanks @dural_yyz. But my user has a role which doesn't have the edit_view_html capability, and he's still able to create dashboards.
Thank you for your reply. The client value always exists and is confirmed in Splunk, but the value is not received in the error ticket. In addition, the other values (time, reason) are not received either. Currently I receive multiple values such as customer, time, reason, etc. as Jira custom fields, and I am wondering whether, if even one value is invalid (e.g. when the reason field exceeds the length limit), none of the mapped values will be received. However, I created a custom field with the same type (multi-line) as the Jira description field and tested it by entering the same value, and the value was still not received in the custom field. Example screenshots are below:

[normal ticket]

[error ticket]
Hi,

We have Splunk Enterprise installed on a Windows computer which does not have direct access to the internet. To access the internet on that computer, we usually open a browser like Chrome or Edge, enter a website (for example https://www.yahoomail.com), and press Enter. A popup then appears in the browser asking us to enter credentials. This popup shows our internet proxy server URL with port number, https://myinternetserver01.mydomain.com:4443, and an option to enter username and password, as attached in the screenshot. Once we enter the credentials, we can browse any website on that computer until we log out.

Due to this restriction, we are unable to use some of the Splunk add-ons which require an internet connection. We tried many options using proxy settings but none of them are working.

Can someone please guide us on where we can input this internet proxy URL, port, and credentials so that Splunk has a connection to the internet and we can use all Splunk add-ons which need it?
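For splunkd itself, outbound proxy settings live in server.conf. A minimal sketch, assuming your proxy accepts basic authentication; note that individual add-ons may ignore these settings and need their own proxy configuration on their setup pages:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[proxyConfig]
http_proxy = http://user:password@myinternetserver01.mydomain.com:4443
https_proxy = http://user:password@myinternetserver01.mydomain.com:4443
no_proxy = localhost, 127.0.0.1, ::1
```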
Hi, thank you for your reply. In the error ticket, all parts mapped to a Jira customfield (e.g. "customfield_10211": "$result.client$") are missing. In regular tickets the customer values, time, etc. are always present, but in error tickets none of the values mapped to customfields come through, even though they are confirmed in Splunk. If there is a problem with even one customfield value, does that prevent the other values from being retrieved as well? (For example, there are two Jira custom fields, client and reason. The reason field is a single-line type, so there is a length limit. If the length limit is exceeded, will I be unable to retrieve values not only from the reason field but also from the client field?)
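If the length limit turns out to be the trigger, one workaround is to truncate the field in the alert's search before it reaches the Jira action. A sketch in SPL; the 255-character cap is an assumption, check your Jira field configuration for the real limit:

```
| eval reason = if(len(reason) > 255, substr(reason, 1, 252) . "...", reason)
```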
Sure, thanks for your reply @marnall. >> Yes, this is a security recommendation added recently. May I know if you or anybody has more details about this security recommendation please? Thanks.
Sure @PickleRick , just asked on #docs and waiting with fingers crossed.
This is probably an INDEXED_EXTRACTIONS issue; see these, which should help:

https://community.splunk.com/t5/Splunk-Search/Why-is-my-search-on-JSON-data-producing-duplicate-results-for/m-p/520686

https://community.splunk.com/t5/Getting-Data-In/Bug-Why-are-there-duplicate-values-with-INDEXED-EXTRACTION/m-p/676784

https://community.splunk.com/t5/Getting-Data-In/Why-would-INDEXED-EXTRACTIONS-JSON-in-props-conf-be-resulting-in/m-p/317327
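The usual fix those threads converge on, assuming JSON data with INDEXED_EXTRACTIONS enabled at ingest, is to disable search-time JSON extraction for the sourcetype so fields aren't extracted twice. A sketch, with a hypothetical sourcetype name:

```
# props.conf on the search head, for the affected sourcetype
[my:json:sourcetype]
KV_MODE = none
AUTO_KV_JSON = false
```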
foreach is immensely powerful, and with good field-naming conventions in your SPL it lets you write concise, if slightly more obtuse, logic. Here it's used with numbers, but you typically use it with fields and wildcards; that's where a good naming strategy becomes important, as it lets you handle unknown field names.
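For instance, a small sketch of the wildcard pattern (the count_* field names are hypothetical): sum every field whose name starts with count_ into a total, whatever those fields turn out to be:

```
| foreach count_*
    [ eval total = coalesce(total, 0) + coalesce('<<FIELD>>', 0) ]
```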
Hi, as you have SCP in use, you have one additional option. You could use Splunk Edge Processor to get the syslog feed in. Of course you need an LB in front of those endpoints to get HA. But probably the easiest way is to use SC4S, as @gcusello said. You could run it on Docker or even k8s if you are familiar with it. r. Ismo
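To give an idea of the footprint, a minimal SC4S start with Docker might look like this (a sketch: the HEC URL and token are placeholders, the env var names are the standard SC4S ones, and the image tag should be checked against the current SC4S release docs):

```
# contents of /opt/sc4s/env_file -- placeholders for your HEC endpoint and token
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://http-inputs-yourstack.splunkcloud.com:443
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000

# start the container listening on syslog's default port
docker run -d --name sc4s \
  -p 514:514 -p 514:514/udp \
  --env-file=/opt/sc4s/env_file \
  ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
```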
Hi, there is no HMAC or similar method in Splunk to ensure that logs haven't been tampered with. Of course you should use TLS as the transport method, but that only ensures the stream is intact, not that the original events are exactly what they were when originally written to disk. If you need this kind of functionality, you should e.g. use HEC to send events directly from your logger to Splunk without writing them to disk on the source side. r. Ismo
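For reference, a direct HEC send looks like this (a sketch; the host, port 8088, and token are placeholders for your own HEC input):

```
curl "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "app started", "sourcetype": "myapp:log", "index": "main"}'
```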
Hi, the preferred method is to set up a syslog server (rsyslog or syslog-ng), or use SC4S, to get logs from syslog sources, and then forward them on: from rsyslog/syslog-ng files with a UF, or, in the SC4S case, SC4S sends them via HEC to your cloud instance. r. Ismo
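A minimal sketch of the rsyslog-plus-UF pattern (paths, index, and port are illustrative): rsyslog writes each source host to its own file, and the UF monitors the directory:

```
# /etc/rsyslog.d/10-splunk.conf -- receive UDP 514, one file per source host
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHost" type="string" string="/var/log/syslog-in/%HOSTNAME%.log")
action(type="omfile" dynaFile="PerHost")

# $SPLUNK_HOME/etc/apps/my_syslog_inputs/local/inputs.conf on the UF
[monitor:///var/log/syslog-in]
sourcetype = syslog
index = network
```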