@VatsalJagani 1. Duplicate logs are ingested into Splunk; we tried changing the checkpoint file value, but even after that, duplicates are ingested at 2 AM. 2. We are using a Python script to ingest TYK MongoDB logs into Splunk.
I am not particularly concerned about the vulnerability right now, but this old OpenSSL version is causing problems when I try to develop new apps. I know it might not be a Splunk problem. urllib3 v2 is a dependency of a package that I need to use, and as I understand it, this version of OpenSSL is not supported by the library or by newer versions of Python. The error message is: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2zi-fips 1 Aug 2023'. See: https://github.com/urllib3/urllib3/issues/2168
I have used this approach to forward logs from a specific index to a third-party system, in my case QRadar. Now I need to do the same thing, but forwarding a specific index using syslog instead of TCP, because TCP takes too long (I ran tcpdump to figure that out). This is the approach I follow:
# props.conf
[default]
TRANSFORMS-send_foo_to_remote_siem = send_foo_to_remote_siem
# transforms.conf
[send_foo_to_remote_siem]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem
# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false
Thanks
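Since the question asks for a syslog variant: a minimal sketch, assuming the same index match applies (the stanza names, port, and protocol below are illustrative placeholders, not settings confirmed in this thread). Syslog routing replaces _TCP_ROUTING with _SYSLOG_ROUTING in transforms.conf and uses a [syslog:...] output group in outputs.conf:
# transforms.conf
[send_foo_to_remote_siem_syslog]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem_syslog
# outputs.conf
[syslog:remote_siem_syslog]
server = remotesiem:514
type = udp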
@tscroggins I did the following:
# outputs.conf
[tcpout:tmao]
server = xxx.xxx.xxx.xxx:9997
# Forward data for the "myindex" index
forwardedindex.0.whitelist = tmao
sendCookedData = false
# props.conf
[source::udp:1517]
TRANSFORMS-routing = route_to_tmao_index
# transforms.conf
[route_to_tmao_index]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = tmao
Is my configuration good? I want to forward the tmao index to another third-party system ... thanks
I have 2 queries, each of which has a subsearch with an inputlookup. Query 1 outputs the timechart for CPU1. It will count each process listed in the CPU1 field of test.csv.
index=custom
| eval SEP=split(_raw,"|"), CPU1=trim(mvindex(SEP,1))
| bin _time span=1m
| stats count(CPU1) as CPU1_COUNT by _time CPU1
| search [ | inputlookup test.csv | fields CPU1 | fillnull value=0 | format ]
Query 2 outputs the timechart for CPU2. It will count each process listed in the CPU2 field of test.csv.
index=custom
| eval SEP=split(_raw,"|"), CPU2=trim(mvindex(SEP,1))
| bin _time span=1m
| stats count(CPU2) as CPU2_COUNT by _time CPU2
| search [ | inputlookup test.csv | fields CPU2 | fillnull value=0 | format ]
test.csv (sample):
CPU1 CPU2 CPU3
process_a process_b process_c
process_d process_e process_f
process_g process_i process_h
What I want is to display the CPU1 and CPU2 timecharts in one chart. Any advice on that would be a great help. Thanks
Hi @ViniciusMariano, In Simple XML, you can use row and panel elements to group inputs and visualizations. To display objects side-by-side, place them in separate panel elements. To display objects stacked top-to-bottom, place them in the same panel element. Combine panel elements within row elements for mixed layouts.
<form version="1.1" theme="light">
<label>Quality Management Storage Rework</label>
<fieldset submitButton="false"></fieldset>
<row>
<panel>
<input type="dropdown" token="region_tok">
<label>Region</label>
<choice value="All">All</choice>
<default>All</default>
<initialValue>All</initialValue>
</input>
<input type="dropdown" token="info_tok">
<label>Info</label>
<choice value="General">General</choice>
<default>General</default>
<initialValue>General</initialValue>
</input>
<chart>
<title>Chart</title>
<search>
<query>| makeresults count=10
| streamstats count as x
| eval y=random()%10
| table x y</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<option name="charting.chart">column</option>
<option name="charting.drilldown">none</option>
</chart>
</panel>
<panel>
<table>
<title>Table</title>
<search>
<query>| makeresults count=10
| eval x=random()%10
| table _time x</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
</row>
</form>
Hi @alfredoh14, The "noise" is the Base64 certificate data. To combine multiple PEM certificate files into a single file, simply concatenate the files:
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
General information on PEM is readily available online. Splunk also provides a quick tutorial at https://docs.splunk.com/Documentation/Splunk/latest/Security/HowtoprepareyoursignedcertificatesforSplunk#How_to_configure_a_certificate_chain. You can reference the combined PEM file in the server.conf [sslConfig] stanza sslRootCAPath setting, e.g.:
# $SPLUNK_HOME\etc\system\local\server.conf
[sslConfig]
sslRootCAPath = X:\path\to\cacerts.pem
Hi @KhalidAlharthi, The basic process is documented at https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd. Summarizing:
1. Define a props.conf stanza matching your source data. Use [default] to match data at the index level.
2. Define a transforms.conf stanza matching your index and targeting one or more output groups.
3. Define an outputs.conf stanza for your remote system.
For example, to redirect all index=foo events from a heavy forwarder to a remote SIEM on port 1234:
# props.conf
[default]
TRANSFORMS-send_foo_to_remote_siem = send_foo_to_remote_siem
# transforms.conf
[send_foo_to_remote_siem]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem
# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false
If defined on an indexer, the events will be indexed locally and forwarded. Note that when using [default], all events will be inspected. The exact settings you need depend on your Splunk architecture and the remote SIEM. I would start by reading Splunk Enterprise Forwarding Data at https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Aboutforwardingandreceivingdata and asking new questions as needed.
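Related to the indexer case above, a minimal sketch of the documented indexAndForward setting, which makes a full Splunk Enterprise instance keep a local copy of data it forwards (whether you want it depends on your outputs configuration):
# outputs.conf
[tcpout]
# Index events locally in addition to forwarding them
indexAndForward = true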
Hi @ianthomas, The Event Timeline Viz add-on is developed and supported by @danspav. You can contact them directly via email (see Splunkbase) or tag them as I have here for support.
You could try using tokens as parts of URLs. If all else fails, you could of course try writing custom JS (if it's a Simple XML dashboard; it won't work for Dashboard Studio dashboards, AFAIR), but it's not a pretty solution.
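For illustration, a minimal Simple XML sketch of a token inside a drilldown URL (the target URL is a placeholder; $click.value|u$ is the URL-encoded clicked value):
<table>
  <search>
    <query>index=_internal | stats count by host</query>
  </search>
  <drilldown>
    <link target="_blank">https://example.com/lookup?host=$click.value|u$</link>
  </drilldown>
</table>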
It's a very broad description. And don't worry about lacking the Enterprise Security app. Yes, it's great and has many useful functionalities, but you can do quite a lot with Splunk Enterprise on its own. The "problem" here is that you don't know what you need. Think about what data you have and what it can tell you about a possible attack. You can use the Security Essentials app for inspiration. But please, don't do "checkbox security", meaning "just write something that seems to satisfy some literal requirement in the least-effort way possible" so that you can cross it off your todo list. That actually impairs your security posture.
The Ingestion Latency indicator is based on "checkpoint" files generated by the forwarders. The file (var/spool/tracker/tracker.log) is periodically generated on a UF and contains a timestamp which Splunk compares after ingestion to see how long it took for that file to reach the indexer. There is one case where the latency alert is a false positive: sometimes the input doesn't properly delete the file when ingesting its contents, so new timestamps are appended to the end of the file. It happened to me once or twice. But other than that, a latency warning simply means that it takes "too long" for the data to get from being read by the UF to being indexed by the indexers. The possible reasons include:
1. Load on the forwarder (this is usually not an issue if you're ingesting logs from a server which normally does some other production work and you only ingest its own logs, but it might be an issue if you have a "log gatherer" setup receiving logs from a wide environment).
2. Throttling on output due to bandwidth limits.
3. The need to ingest a big backlog of events (can happen if the UF wasn't running for some time, or if you're installing a fresh UF on a host which was already running and has produced logs that you want ingested).
4. Connectivity/configuration problems preventing the UF from sending the buffered data to the indexers.
5. Blocked receivers due to performance problems.
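If you want to measure end-to-end latency yourself, a minimal SPL sketch (the index and time range are placeholders; _indextime minus _time gives the ingestion lag in seconds):
index=main earliest=-1h
| eval lag = _indextime - _time
| stats avg(lag) max(lag) perc95(lag) by host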
It seems perfectly acceleratable. The bin command is a streaming one, so the requirement of only streaming commands before the first transforming command is fulfilled. You could try summary indexing here instead of report acceleration, though - this would give you more flexibility in using the summarized data later, should you need to reference it in other searches.
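For illustration, a minimal summary-indexing sketch (the index, span, and field names are placeholders, not taken from the original search) - a scheduled search writes pre-aggregated results with collect, and later searches read them back:
index=web sourcetype=access_combined
| bin _time span=1h
| stats count by _time status
| collect index=my_summary
A later search can then query index=my_summary directly instead of recomputing the stats over raw events.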
Hi community, I'm wondering if it's possible to forward a specific index in Splunk to a third-party system or SIEM such as QRadar. I have read something about a HF making this possible, but I don't understand it fully. If yes, please give me an approach to do this .. thanks
OK, but what user/password are you putting in? It should be your splunk.com account (possibly the one you use for posting on Answers) because under the hood the server logs in to Splunkbase with your credentials. Of course, you need to have internet access from your Splunk server (I assume you use an all-in-one setup).
Delta is a relatively simple command - it just calculates the difference from the previous value. Nothing more, nothing less. If you want to track the differences separately for - for example - different devices, you need to use streamstats to copy over the previous value of a given field X separately for each value of field Y (or a combination of more fields).
| streamstats current=f window=1 values(myfield) as old_myfield by splitfield
Now you can just calculate the difference of myfield and old_myfield.
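For example (the field names match the streamstats call above; the eval line is the only addition):
| streamstats current=f window=1 values(myfield) as old_myfield by splitfield
| eval myfield_delta = myfield - old_myfield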
Hi, I manage to monitor the servers, divided into services, via ITSI. However, I would like to receive email alerts when some of my servers change state, to either inactive or unstable, for better reactivity.
Until you can tell us what data you have, what field/value in that data indicates inactive and unstable entities, and how you want the output to look, volunteers are not going to be able to help you.