You could try something like this (note the closing parenthesis on the if() call, which was missing):

<drilldown>
  <condition field="Name">
    <eval token="link">if(isnull($row.url$),"","https://$row.url|n$")</eval>
    <link target="_blank">$link$</link>
  </condition>
</drilldown>
Hi @mohammadnreda, I would recommend following some general steps to ingest your logs via syslog. There are multiple ways to get syslog data into Splunk, and the current best practice is to use Splunk Connect for Syslog (SC4S). This is a containerized syslog-ng server with a configuration framework designed to simplify getting syslog data into both Splunk Enterprise and Splunk Cloud. If you already have a dedicated syslog server (such as rsyslog or syslog-ng), you simply need to enable syslog forwarding from your Sangfor firewall to the syslog collector. From there, use a Universal Forwarder instance to read and forward your syslogs to an indexer. Some useful links: Data collection architecture - Splunk Lantern, Splunk Connect for Syslog | Splunkbase, Syslog - Splunk Lantern, Using Syslog-ng with Splunk | Splunk. There is also an older post about the Sangfor firewall (answered by gcusello) that might be helpful for you: How to Onboard a device in Splunk? - Splunk Community. Best regards,
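If you go the dedicated syslog server route, the Universal Forwarder side can be sketched roughly like this (the path, index, and sourcetype below are placeholders for illustration, not Sangfor's actual values):

```
# inputs.conf on the Universal Forwarder reading the syslog collector's files
[monitor:///var/log/syslog-ng/sangfor/*.log]
index = network
sourcetype = sangfor:firewall
disabled = false
```

Make sure the target index exists on the indexers before enabling the input.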
Hi @mohammadnreda, I suppose that you're receiving your firewall logs via syslog. So you have to create your own custom add-on to parse your logs. If you need to use them in ES or ITSI, you also have to normalize them, and the Splunk Add-on Builder (https://splunkbase.splunk.com/app/2962) can help you with normalization. If instead you only have to monitor your firewalls, you can simply parse your logs to extract all the relevant fields to use in dashboards. Anyway, my hint is to normalize your logs so you have a custom CIM 4.x compliant add-on. Ciao. Giuseppe
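As a sketch of what the parsing part of such an add-on might contain (the sourcetype name and the extraction regex are made-up examples here, not Sangfor's real log format):

```
# props.conf in the custom add-on (example values)
[sangfor:firewall]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 30
# inline field extraction; the regex must be adapted to the actual log format
EXTRACT-fw_fields = action=(?<action>\w+)\s+src=(?<src>\S+)\s+dst=(?<dest>\S+)
```

For CIM compliance you would then add tags and aliases (e.g. eventtypes tagged network/communicate) so the fields map onto the relevant data model.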
I'm not sure what your problem is, but if it's related to orphaned private objects, you'll need to re-create the deleted account locally and reassign the objects to another user.
Hi there, we are using the JIRA Service Desk add-on to open JSM tickets from Splunk ES correlation search alerts. I found the docs on how to set up the add-on via the REST API (https://ta-jira-service-desk-simple-addon.readthedocs.io/en/latest/configuration.html#configuring-via-rest-api). My question is: is it possible to use the REST API to configure the response action itself for every correlation search?
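In general, a correlation search's alert actions are stored on the saved search itself, so they can be set by POSTing the corresponding savedsearches.conf keys to the standard /servicesNS/nobody/<app>/saved/searches/<search name> REST endpoint. A hedged sketch of the keys such a POST would set; the action name and parameter names below are assumptions, so check the add-on's savedsearches.conf.spec for the real ones:

```
# savedsearches.conf keys that the REST POST would set (illustrative only)
actions = jira_service_desk                              # assumed action name
action.jira_service_desk.param.jira_project = OPS        # hypothetical parameter
action.jira_service_desk.param.jira_issue_type = Task    # hypothetical parameter
```

Each key becomes a form field in the POST body, the same way the add-on docs show for its own configuration endpoints.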
Hi @karthi2809, surely dashboard performance can be improved using base searches, but the best approach to a performant dashboard is always to work on having performant searches. If you can, use all the ways to accelerate searches: tstats (when possible), accelerated data models, summary indexes, and, if possible, reports (especially when the dashboard is used by many users). The way to improve the performance of your searches depends on the searches themselves. Ciao. Giuseppe
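For example, counting over an accelerated data model with tstats is usually far faster than the equivalent raw event search (the CIM Network_Traffic data model is used here just as an illustration; substitute whatever model fits your data):

```
| tstats count
    from datamodel=Network_Traffic
    where nodename=All_Traffic
    by All_Traffic.src _time span=1h
```

The same aggregation as a raw index=... | stats search would have to read every event, while tstats reads only the accelerated summaries.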
Well, I finally found what was missing. There's another certificate for the web interface in /opt/splunk/etc/auth/splunkweb. I did the same as with the other certificate (renamed it to .old and restarted the service) and it automatically recreated a new, updated certificate.
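A quick way to confirm which certificate a given .pem file actually contains is to print its expiry date and fingerprint with openssl and compare them against what the browser shows. A self-contained sketch (it generates a throwaway cert only so the command has something to inspect; on a real system you would point the second command at the file under /opt/splunk/etc/auth/splunkweb instead):

```shell
# Create a throwaway self-signed cert just for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 365 2>/dev/null

# Print expiry date and fingerprint; compare with what the browser reports
openssl x509 -noout -enddate -fingerprint -sha256 -in /tmp/demo-cert.pem
```

If the on-disk fingerprint differs from the one in the browser, the web interface is serving a different file (or a proxy in front of it is terminating TLS with its own certificate).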
Hi all, I've been working on a dashboard in Splunk and I'm noticing that it takes a considerable amount of time to load. How can I optimize the performance of my dashboard? 1. I created most of the queries as base searches. 2. How do I turn panels into reports? If they are made into reports, will the dashboard be more efficient? I am also using dynamic searches in my dashboard. Could you please provide some tips or examples to improve the speed and performance of my Splunk dashboard? Thanks, Karthi
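On the base-search point, the Simple XML pattern looks roughly like this (the id, query, and fields are placeholders): the base search runs once, and each panel applies only a cheap post-process on top of its results instead of re-running the full query.

```
<search id="base">
  <query>index=web sourcetype=access_combined | fields status uri _time</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>
<panel>
  <chart>
    <search base="base">
      <query>| stats count by status</query>
    </search>
  </chart>
</panel>
```

Note that a base search should end with fields (not table) and return events or a transformed result set small enough to post-process efficiently.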
Great, I'm glad to hear that this solution was helpful for your use case. Happy Splunking and best regards ; ) P.S.: Karma points are always appreciated
So we have an internal load balancer that distributes HEC requests between 2 heavy forwarders. HEC is working fine, and all but a small fraction of the requests make it to the heavy forwarders. The sender of the events gets the 503 error below: "upstream connect error or disconnect/reset before headers. reset reason: connection termination", while the internal load balancer gets this error: "backend_connection_closed_before_data_sent_to_client". What really baffles me is that I couldn't find any error logs in Splunk that might be connected to this issue. There's also no indication that our heavy forwarders are hitting their queue limits. I even tried increasing the max queue size of certain queues, including that of the HEC input in question, but even that didn't help at all. Is there anything else I can check to help me pinpoint the cause of this problem?
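One place worth checking that doesn't always surface as an obvious error is splunkd's own internal logging for the HEC handler on the heavy forwarders; something along these lines (the host filter is a placeholder, and HttpInputDataHandler is the standard splunkd component for HEC) may show refused connections, channel problems, or token issues:

```
index=_internal sourcetype=splunkd host=hf1 OR host=hf2
    component=HttpInputDataHandler OR component=HttpListener
    log_level=ERROR OR log_level=WARN
```

It can also be worth checking whether the load balancer's idle/keep-alive timeout is longer than the HEC server's, since a server-side keep-alive close mid-request produces exactly the "connection termination before headers" symptom at the client.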
Hi, I also had the same problem. I tested several setups, and what worked was the solution provided by MaverickT: just create a GPO and add the virtual account to the "Event Log Readers" group. This does the trick. It seems that the "SeSecurityPrivilege" privilege isn't enough to read the Sysmon event log, which is weird, because all the other logs are readable. I can read PowerShell logs with these settings, but not the Sysmon logs.
We noticed this morning that all the certificates for our Splunk servers have been expired for a week (discovered while investigating why KVStore stopped this weekend). I followed the recommendation from another community post by renaming server.pem to server.pem.old and restarting the Splunk service to create a new one. It correctly creates a new server.pem with a valid expiration date; however, my browser still displays the old certificate. I already checked with btool, and it seems fine (pointing to server.pem). I also already checked web.conf and tried to manually set the file path, but it's still not working... Am I missing something?
Hi @woodlandrelic, hope this message finds you well. I have recently moved from a Splunk developer role to an admin role. I have to build a clustered environment from scratch. I have a basic understanding of a clustered environment but haven't set one up yet. Could you please guide me on how to start? For example, what kind of knowledge/information gathering needs to be done with the client or customer beforehand? Also, is there a procedure/order of components to follow? It would be really helpful for me. Thanks in advance.
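To give a flavor of where the setup eventually lands: an indexer cluster is mostly driven by the [clustering] stanza in server.conf. A minimal sketch (hostnames, port, secret, and factors are placeholders; note that recent Splunk versions use manager/peer terminology, while older releases use master/master_uri):

```
# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey = <your_cluster_secret>

# server.conf on each indexer (peer)
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_cluster_secret>

[replication_port://9887]
```

Before any of this, the information to gather from the customer is typically daily ingest volume, retention requirements, search workload, and availability targets, since those drive the replication/search factors and the number of peers.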