All Posts


If a key=value pair's value contains a space, it wraps the value in quotes so Splunk can parse it without any issue; if there is no space, Splunk already recognizes key=value and parses the information without any issue. @PickleRick thanks for your help. Now I'm facing a big problem regarding batch adding.
OK. That's one way to do it. Be aware, though, that it will probably break if you get quotes in your field values.
Hello guys, I am quite new to the topic, so I really need your help ^_^. I am ingesting Zscaler logs into a Splunk Cloud instance using a Heavy Forwarder and TCP inputs. Since the volume of AUTH logs is huge, we want to filter them with the following condition: if a user logs in to an application today, all subsequent logs for that user logging in to that application on that specific day (month/date/year) would be discarded, and we would resume ingesting the next day under the same conditions. I hope this is pretty clear. I know this can be done in props.conf and transforms.conf, but I am not sure how I should build the string. Thank you in advance.
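For context on the mechanism the poster mentions: event filtering on a heavy forwarder is usually done with a props.conf/transforms.conf pair that routes unwanted events to nullQueue. A minimal sketch follows; the sourcetype name and the REGEX placeholder are hypothetical, and note that the stateful "first login per user, per app, per day" deduplication described above cannot be expressed here, since transforms are stateless regex filters:

```ini
# props.conf -- sourcetype name is a hypothetical placeholder
[zscaler:auth]
TRANSFORMS-dropauth = drop_repeat_auth

# transforms.conf -- REGEX is a placeholder, not a real dedup rule
[drop_repeat_auth]
REGEX = <pattern matching the events to discard>
DEST_KEY = queue
FORMAT = nullQueue
```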
Can you share more details on which port you're trying to access?
I applied this regex with SEDCMD on the HF before sending data to the indexers: s/(\w+)=([^\s"][^"\r\n=]*\s[^\r\n=]*)(?=\s|$)/\1="\2"/g
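To illustrate what this SEDCMD does, here is the same substitution expressed in Python's re module (a sketch for demonstration only; the sample input strings are made up, not taken from the thread):

```python
import re

# The SEDCMD pattern from the post, translated to re.sub.
# It quotes a key=value pair only when the value contains a space,
# so key/value boundaries stay unambiguous for Splunk's KV parser.
PATTERN = re.compile(r'(\w+)=([^\s"][^"\r\n=]*\s[^\r\n=]*)(?=\s|$)')

def quote_spaced_values(raw: str) -> str:
    """Wrap values that contain spaces in double quotes; leave the rest alone."""
    return PATTERN.sub(r'\1="\2"', raw)

print(quote_spaced_values("msg=hello world code=5"))
# msg="hello world" code=5
```

As the thread notes, this approach breaks down if the field values themselves contain double quotes, since the pattern deliberately skips values that start with one.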
Interesting approach. Out of sheer curiosity - what SEDCMD did you use?
1. Just because an app reports a particular version of a library doesn't mean it hasn't been patched (see Debian and its backporting practices). 2. This particular vulnerability is far from critical: "However, only applications that directly call the SSL_select_next_proto function with a 0 length list of supported client protocols are affected by this issue. This would normally never be a valid scenario and is typically not under attacker control but may occur by accident in the case of a configuration or programming error in the calling application." Don't believe everything Nessus/Rapid7/OpenVAS/whatever says.
Did you manage to solve this?
I fixed the issue by using a regex with the SEDCMD command on the HF to fix the parsing, and now everything is good. Thanks for the help @PickleRick
Hello @shai, You can find the Splunk Cloud root CA in the Universal Forwarder package present on your Splunk Cloud search head. It gives you a forwarder package with preconfigured outputs to forward data to the Splunk Cloud indexers. Within the same app, you can find the certificates that you need to append to your self-signed ones. The package name should look something like this: 100_<<stack_name>>_splunkcloud   Thanks, Tejas.
So it turns out SQL doesn't write the entire event at once, and Splunk therefore only reads part of the event. It worked in our TEST environment because I dumped the log file, so the entire events were there. The solution was: multiline_event_extra_waittime = true time_before_close = 10
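For reference, these two settings belong in the inputs.conf monitor stanza on the instance reading the file; the monitored path below is a hypothetical placeholder:

```ini
# inputs.conf -- the monitored path is hypothetical
[monitor:///var/log/sql/app.log]
# allow extra time for slow writers to finish multiline events
multiline_event_extra_waittime = true
# keep reading the file for 10 seconds after reaching EOF before closing it
time_before_close = 10
```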
Hi Ryan, Thanks for checking in. I am still trying to figure out the issue. yyyyyyyy000:/opt/appdynamics/machine-agent/jre/bin # ./keytool -printcert -sslserver cxxxxx.saas.appdynamics.com:443 keytool error: java.lang.Exception: No certificate from the SSL server Support suggested checking with the network team for any block; I am currently looking into that.
@PickleRick Your comments helped. I was applying this at the UF level; changing to the indexers made it work. Thanks
Hi @jmartens, I just checked. Yes, for the 9.3.x branch, the fix is in version 9.3.1. Hope it helps!
Trying to use Splunk Cloud, I get: "The connection has timed out. An error occurred during a connection to prd-p-xauy6.splunkcloud.com." It seems to be an SSL cert error because of strict checking. Is there a solution?
Using append is almost never the right solution - you are performing the same search three times and just collecting bits of info each time. This can be done in one search:   index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>" | eval success_time=if(searchmatch("Completed invokexPressionJob and obtained queue id ::"), _time, null()) | rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]" | stats latest(_time) as latest_time latest(success_time) as success_time sum(eval(if(level="ERROR",1, 0))) as errors | convert ctime(latest_time) | convert ctime(success_time)   success_time is set when the event matches the wanted criteria, and errors counts events whose level is ERROR. I'm not sure what you're trying to do with the final append with Print Job on a new row.
Splunk Add-on for Google Cloud Platform: how do I add logs/a new input to get Kubernetes Pod status? What are the steps to add a new input that brings Kubernetes Pod status (highlighted in the GCP picture of Pods below) into Splunk?
Hi @Fadil.CK, Thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not allowing other members to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Dhinagar.Devapalan, Thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not giving other members a chance to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Jeffrey.Escamilla, I know we had some issues with community access last week. You can come back to comment and create content again. With that in mind, if Mario's answer helped you out, please click the 'Accept as Solution' button on the reply that helped. If you need more help, reply to keep the conversation going.