All Posts

Thanks everyone, I will go with the upgrade route then. It will be safer that way.
Hi all, Has anyone had experience matching Linux audit logs to CIM before? I installed the Add-on for Unix and Linux, but it didn't help. Looking at some of the use cases in Security Essentials, it seems they expect data from EDR solutions like CrowdStrike or Symantec, rather than local Linux audit logs. Does this mean there is no way to use the out-of-the-box use cases created in Security Essentials/Enterprise Security for Linux logs?   Thanks
This is fairly common and easy to correct.
1. Stop Splunk.
2. Change the ownership of all Splunk files by running the following command (as root): chown -R splunk:splunk /opt/splunk
3. Start Splunk.
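As a minimal sketch of the full sequence, assuming a default installation under /opt/splunk and that Splunk should run as the splunk user (adjust paths and user names to your environment):

# stop Splunk first so no files are being written while ownership changes (run as root)
/opt/splunk/bin/splunk stop

# recursively hand the whole installation back to the splunk user and group
chown -R splunk:splunk /opt/splunk

# start Splunk again as the splunk user so new files are created with the correct owner
sudo -u splunk /opt/splunk/bin/splunk start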
Did you restart Splunk after changing $JAVA_HOME? The info is stored in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf. See https://docs.splunk.com/Documentation/DBX/3.18.0/DeployDBX/settingsconfspec for a description of the contents.
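If it helps, the Java path DB Connect actually uses normally sits under the [java] stanza in that file; it should look something like the snippet below (the path is only an example, and the exact setting names are described in the spec linked above):

[java]
javaHome = /usr/lib/jvm/java-11-openjdk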
Hi Jorob, I saw this option as well. But what if we don't want to run the Splunk daemon via /etc/init.d? I mean, the problem should be well known to Splunk, and for almost a year we haven't heard about any improvements from them. I'm a little disappointed that Splunk doesn't describe a workaround in the docs or even look for a solution. It looks like nobody at Splunk cares about this problem. As I mentioned, I think it's a bad idea to have to install all universal forwarders in the “old” way just because Splunk Stream can't handle it. We are all eagerly awaiting Splunk's response. Greetings
I am getting the same thing when trying to use the Splunk Windows and AWS add-ons. I just installed them, and when they try to load, the page shows for a quick second and then I see that error message.
There is no guarantee that the first event for each sky_id has a value in catchup_updated_time, so the filldown can be pulling any value down from the previous sky_id. When the dedup is done, only the first event for each sky_id is kept (which could have the wrong catchup_updated_time). Try either

| sort -sky_id catchup_updated_time
| filldown catchup_updated_time, sky_ui_timestamp

or

| sort -sky_id
| eventstats values(catchup_updated_time) as catchup_updated_time, values(sky_ui_timestamp) as sky_ui_timestamp by sky_id
@N_K I would recommend that you make the input playbook capable of handling a list of items as its input and do the iteration inside the playbook, as that will be the path of least resistance and put less strain on the platform from a worker perspective. A rough sketch of the idea is below.
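As a minimal sketch, assuming a custom code block inside the input playbook and a playbook input named post_list (both the input name and the datapath below are placeholders for illustration, not generated playbook code):

import phantom.rules as phantom
import json

def process_post_list(container=None, **kwargs):
    # collect the playbook input as a list; "playbook_input:post_list" is a
    # placeholder datapath for an input named post_list
    items = phantom.collect2(container=container, datapath=["playbook_input:post_list"])

    results = []
    for item, in items:
        # do the per-item work here instead of launching a child playbook once per item
        phantom.debug("processing item: {}".format(item))
        results.append(item)

    # stash the aggregated result so the playbook's output block can pick it up
    phantom.save_run_data(key="post_list_results", value=json.dumps(results))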
I have an input playbook with two output variables. I can retrieve these variables when I call the playbook using the playbook block in the UI. However, I now need to loop over items in a list and call the playbook for each item in that list, which requires using the phantom.playbook function. From what I can see, there is no way to retrieve the output of the playbook now, is that correct?

Example below:

for item in prepare_data__post_list:
    phantom.playbook(playbook="local/__Post_To_Server", container={"id": int(container_id)}, inputs={"body": item, "headers": prepare_data__headers, "path": prepare_data__path})
index=abc sourcetype=abc_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Trade Store has been updated" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | dedup sky_id sortby +_time | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") ```| eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix``` | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first 
Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, cim_latency, distributor_latency, ep_latency, catchup_latency, wss_to_sky_latency, distributor_to_sky_latency |sort - sky_id | join type=left sky_id [ search index=sky sourcetype=sky_webservices_logs source="D:\\SkyNet\\SkyWebService\\logs\\live-risk-stomp-broadcast.log" "maxskyid" | where maxskyid > 0 | dedup maxskyid | rename maxskyid as sky_id | eval sky_ui_timestamp=strftime(_time, "%Y/%m/%d %H:%M:%S.%3N") | table sky_id host sky_ui_timestamp ] | sort -sky_id | filldown catchup_updated_time, sky_ui_timestamp | eval mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval sky_ui_unix_time = strptime(sky_ui_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval wss_to_sky_latency = sky_ui_unix_time - booking_timestamp_unix | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval distributor_to_sky_latency = 
sky_ui_unix_time - CIMsendingTime_unix | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval ep_latency = round(ep_latency * 1000,0) | eval catchup_latency = catchup_unix_time - ep_timestamp_unix | eval catchup_latency = round(catchup_latency * 1000,0) | eval ui_latency = sky_ui_unix_time - catchup_unix_time | eval ui_latency = round(ui_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, sky_ui_timestamp, cim_latency, distributor_latency, ep_latency, catchup_latency, ui_latency, wss_to_sky_latency, distributor_to_sky_latency | dedup sky_id | search portfolio_name = $portfolio$ | where len(CIMsendingTime) > 0
Hello Splunkees,

what are the differences between the different options for app updates? I know three different ways to update an app:

1) Via web interface: Apps -> Manage Apps -> Install app from file -> check 'Upgrade app. Checking this will overwrite the app if it already exists.'
2) Via CLI: ./splunk install app <app_package_filename> -update 1 -auth <username>:<password>
3) Extract the content of the app .tgz to $SPLUNK_HOME/etc/apps/ (if the app already exists, overwrite its files) and restart the splunk service afterwards.

Background of my question: I want to implement an automated app update process with Ansible for our environment, and I want to use the smartest method. Currently, we're using Splunk 9.1.5.

Thank you!

BR dschwarz
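For the Ansible angle, method 3 maps naturally onto a couple of tasks. A minimal sketch, assuming Splunk lives under /opt/splunk and runs as the splunk user (the package name and paths are placeholders, not a finished role):

- name: Extract the app package into the apps directory
  ansible.builtin.unarchive:
    src: files/my_app.tgz          # hypothetical package shipped with the playbook
    dest: /opt/splunk/etc/apps/
    owner: splunk
    group: splunk

- name: Restart Splunk so the updated app is picked up
  ansible.builtin.command: /opt/splunk/bin/splunk restart
  become: true
  become_user: splunk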
That's not something I have come across
OK, I will try that. Also, do you know if we can enable hyperlinks for field values in a report?
Please share your search / SPL, preferably in a code block, not a picture
I think this is a feature of Gmail which Slack doesn't have, i.e. Gmail recognises the text as a website address and converts it to a link; Slack doesn't do this. Try sending a message to a Gmail account with just the text of a website address and see if Gmail converts it to a link.
FYI, this is happening quite randomly: it fills in wrong values (not the value above it), but it's quite random; sometimes it works, sometimes it doesn't.
Noted, thanks @ITWhisperer. However, it looks like it's not working as expected.

This is before filldown: [screenshot]

This is after filldown: [screenshot]

Why is it not populating 2024/09/04 07:54:20.445 from the rows below? Instead it is filling with 2024/09/04 07:54:52.137.
I would like to create a Service Analyzer that displays only KPIs. Is this possible? If so, I would like to know the steps.
I have 60 correlation searches in Content Management. Some of my correlation searches don't trigger notables in Incident Review, but when I run the search manually it shows results. No suppression, no throttling, and now I'm confused. Someone help me, please.
Hi everyone, I'm currently sending vCenter logs via syslog to Splunk and have ensured that the syslog configuration and the index name on Splunk are correct. However, the logs still aren't appearing in the index. I have run tcpdump and I can see the logs arriving at my Splunk instance. Below I attach the syslog configuration and the tcpdump result from my Splunk instance. What could be the cause of this issue, and what steps should I take to troubleshoot it? Thanks for any insights!