All Posts


A few questions: Do you have a TA for the logs you are ingesting, and is it set up on all the needed Splunk components (check the docs)? Looking at the _internal logs, do you see that Splunk has ingested them? Can you search for a string that exists in your logs, across all your indexes, and find any matching events within the time range in which you verified the data was ingested?

Also, for syslog data in general it is simpler and more durable to forward the data to a syslog server, have a UF monitor the relevant files, and then set up monitoring stanzas per host/data source:

[monitor://var/log...whatever]
whitelist = regex
blacklist = regex
host_segment = as needed
crcSalt = <SOURCE> {as needed}
sourcetype = syslog {or whatever you want}
index = yourIndex

Consult also: How the Splunk platform handles syslog data over the UDP network protocol - Splunk Documentation
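If you want to confirm that an input is actually being indexed, the per-sourcetype throughput metrics in _internal are a quick check. This is only a sketch; the series value assumes the syslog sourcetype from the stanza above, so substitute your own sourcetype:

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=syslog
| timechart span=5m sum(kb) AS indexed_kb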
After updating the SSL keys, events with errors "ExecProcessor from python /opt/splunk/etc/apps/SA-Hydra/bin/bootstrap_hydra_gateway.py" from the source "/opt/splunk/var/log/splunk/splunkd.log" began appearing in the "_internal" index. The Splunk version is 7.3.2.
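For anyone troubleshooting something similar, here is a minimal sketch of a search to isolate those messages; the log_level filter is an assumption, so loosen it if the events are logged at a different severity:

index=_internal sourcetype=splunkd source=*splunkd.log* ExecProcessor "bootstrap_hydra_gateway.py"
| stats count BY host, log_level, component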
I'm just saying you're putting the cart before the horse. You know _now_ that something happened. When? 10 minutes ago? 10 hours ago? 10 days ago? Do you know whether you should react immediately and - for example - isolate the workstation to prevent the threat from spreading in your infrastructure, or whether you should rather focus on searching for where it has already spread? You're trying to solve a different problem than the one you have. If you have sources which can be lagging, you should account for that in your searches so you don't end up missing data because it arrived and got indexed outside of your search window. But that's different from just happily letting your clocks run loose and trying to guess your way around it. IMHO you're simply solving the wrong problem. But of course YMMV. EDIT: Oh, and of course you have the possibility of using _indextime in your searches. It's just that _time is _the_ most important field about the event. PS: If you think _indextime is the most important field for you, just use DATETIME_CONFIG = CURRENT and be done with it.
Exactly what I needed. Thanks!
I can't, but at least I can catch that event with index time, correlate it with other security events, and analyze it to see a bigger picture. Things like that are expected in the security world, and it's better to catch them with an unreliable time than to miss them. Being able to tell when something happened is not as critical as being able to tell that it happened. Missing such events may mean a lot of damage to the company. We are not asking for ES to be a time synchronization tool, but simply being allowed to search on both _indextime and _time would be incredibly useful.
There are pros and cons of everything of course. But ES can't be - for example - a substitute for reliable time source and proper time synchronization. That's not what it's for. If you don't have a reliable time, how can you tell when the event really happened? If you have a case when the clock can be set to absolutely anything so you have to search All-Time, how can you tell when the event happened (not when it was reported)?
You are going to miss data if you are using event time for security alerting. Event timestamps are unreliable; we have seen event times two years in the future due to system clock misconfigurations. Event delays and outages are common. Our average delay is 20 minutes, and the SLA for delivery is 24 hours. If we want to run security alerting every hour to reduce dwell time, we have to look back 24 hours instead of 1 hour. If we are running over 1K security searches, that adds up. On top of that, there is always a chance of missing a misconfigured clock unless we check All Time. Using _indextime for alerting and event time for analyzing the events would work perfectly for our use case. Unfortunately, it seems not to be feasible with all the constraints in ES, so we have to run our searches over a very large time span to account for event delays, we have to check future times, and we have to have outage replay protocols. Very inconvenient; I wish we could just run searches on _indextime (every hour) with a broader _time window (24 hours), not All Time.
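For what it's worth, outside of ES correlation searches you can approximate that pattern in plain SPL. This is only a sketch, with a placeholder index name, a 24-hour event-time window plus headroom for future-dated clocks, and the last hour of arrivals selected via _indextime:

index=security_example earliest=-24h latest=+2y
| where _indextime >= relative_time(now(), "-1h")
``` detection logic goes here ```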
Thanks everyone, I will go the upgrade route then. It will be safer that way.
Hi all, Has anyone had experience matching Linux audit logs to CIM before? I installed the Add-on for Unix and Linux, but it didn't help. Looking at some of the use cases in Security Essentials, it seems they expect data from EDR solutions like CrowdStrike or Symantec, rather than local Linux audit logs. Does this mean there is no way to use the out-of-the-box use cases created in Security Essentials/Enterprise Security for Linux logs?   Thanks
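In case it helps narrow things down, one way to check whether the audit data is making it into CIM at all is to see if it populates a relevant data model. This is just a sketch; the Endpoint.Processes data model and the linux_audit index name are assumptions to adapt to your environment:

| tstats count from datamodel=Endpoint.Processes where index=linux_audit by sourcetype
| sort - count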
This is fairly common and easy to correct:
1. Stop Splunk.
2. Change the ownership of all Splunk files, as root: chown -R splunk:splunk /opt/splunk
3. Start Splunk.
Did you restart Splunk after changing $JAVA_HOME? The info is stored in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf. See https://docs.splunk.com/Documentation/DBX/3.18.0/DeployDBX/settingsconfspec for a description of the contents.
Hi Jorob
I saw this option as well. But what if we don't want to run the Splunk daemon via /etc/init.d? I mean, the problem should be well known to Splunk, and for almost a year we haven't heard about any improvements from them. I'm a little disappointed that Splunk doesn't describe a workaround in the docs or even seem to be looking for a solution. It looks like nobody at Splunk cares about this problem. As I mentioned, I think it's a bad idea to have to install all universal forwarders the “old” way just because Splunk Stream can't handle it. We are all eagerly awaiting Splunk's response. Greetings
I am getting the same thing when trying to use the Splunk Windows and AWS add-ons. I just installed them, and when they try to load, the page shows for a quick second and then I see that error message.
There is no guarantee that the first event for each sky_id has a value in catchup_updated_time, so the filldown can be pulling any value from the previous sky_id down. When the dedup is done, only the first event for each sky_id is kept (which could have the wrong catchup_updated_time). Try either

| sort -sky_id catchup_updated_time
| filldown catchup_updated_time, sky_ui_timestamp

or

| sort -sky_id
| eventstats values(catchup_updated_time) as catchup_updated_time, values(sky_ui_timestamp) as sky_ui_timestamp by sky_id
@N_K I would recommend that you make the input playbook capable of handling list items as inputs and do the iteration inside the playbook, as that will be the path of least resistance and put less strain on the platform from a worker perspective.
I have an input playbook with two output variables. I can retrieve these variables when I call the playbook using the playbook block in the UI. However, I now need to loop over items in a list and call the playbook for each item in that list, which requires using the phantom.playbook function. From what I can see, there is no way to retrieve the output of this playbook now; is that correct?

Example below:

for item in prepare_data__post_list:
    phantom.playbook(playbook="local/__Post_To_Server",
                     container={"id": int(container_id)},
                     inputs={"body": item,
                             "headers": prepare_data__headers,
                             "path": prepare_data__path})
index=abc sourcetype=abc_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Trade Store has been updated" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | dedup sky_id sortby +_time | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") ```| eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix``` | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first 
Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, cim_latency, distributor_latency, ep_latency, catchup_latency, wss_to_sky_latency, distributor_to_sky_latency |sort - sky_id | join type=left sky_id [ search index=sky sourcetype=sky_webservices_logs source="D:\\SkyNet\\SkyWebService\\logs\\live-risk-stomp-broadcast.log" "maxskyid" | where maxskyid > 0 | dedup maxskyid | rename maxskyid as sky_id | eval sky_ui_timestamp=strftime(_time, "%Y/%m/%d %H:%M:%S.%3N") | table sky_id host sky_ui_timestamp ] | sort -sky_id | filldown catchup_updated_time, sky_ui_timestamp | eval mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval sky_ui_unix_time = strptime(sky_ui_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval wss_to_sky_latency = sky_ui_unix_time - booking_timestamp_unix | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval distributor_to_sky_latency = 
sky_ui_unix_time - CIMsendingTime_unix | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval ep_latency = round(ep_latency * 1000,0) | eval catchup_latency = catchup_unix_time - ep_timestamp_unix | eval catchup_latency = round(catchup_latency * 1000,0) | eval ui_latency = sky_ui_unix_time - catchup_unix_time | eval ui_latency = round(ui_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, sky_ui_timestamp, cim_latency, distributor_latency, ep_latency, catchup_latency, ui_latency, wss_to_sky_latency, distributor_to_sky_latency | dedup sky_id | search portfolio_name = $portfolio$ | where len(CIMsendingTime) > 0
Hello Splunkees,

what are the differences between the different options for app updates? I know three different ways to update an app:

1) Via the web interface: Apps -> Manage Apps -> Install app from file -> check 'Upgrade app. Checking this will overwrite the app if it already exists.'
2) Via the CLI: ./splunk install app <app_package_filename> -update 1 -auth <username>:<password>
3) Extract the content of the app.tgz to $SPLUNK_HOME/etc/apps/ (if the app already exists, overwrite its files) and then restart the Splunk service.

Background of my question: I want to implement an automated app update process with Ansible for our environment, and I want to use the smartest method. Currently, we're using Splunk 9.1.5.

Thank you!

BR dschwarz
That's not something I have come across
OK, I will try that. Also, do you know if we can enable hyperlinks for field values in a report?