All Posts

Hi @SplunkerNoob, first create a field in your search which contains the URLs, e.g.

... | eval target_url=case(
        device_type=="type1", "https://device1.com",
        device_type=="type2", "https://device2.com",
        device_type=="type3", "https://device3.com",
        1==1, "https://default.com")

Then in your dashboard:

<drilldown>
  <link target="_blank">$row.target_url|n$</link>
</drilldown>
Hi @Gauri you can use "|eventstats" instead of "|stats" to keep the data in the pipeline for the later "|stats" command:

| eval totalResponseTime=round(requestTimeinSec*1000)
| convert num("requestTimeinSec")
| rangemap field="totalResponseTime" "totalResponseTime"=0-3000
| rename range as RangetotalResponseTime
| eval totalResponseTimeabv3sec=round(requestTimeinSec*1000)
| rangemap field="totalResponseTimeabv3sec" "totalResponseTimeabv3sec"=3001-60000
| rename range as RangetotalResponseTimeabv3sec
| eval Product=case(
    (like(proxyUri,"URI1") AND like(methodName,"POST")) OR
    (like(proxyUri,"URI2") AND like(methodName,"GET")) OR
    (like(proxyUri,"URI3") AND like(methodName,"GET")), "ABC")
| bin span=5m _time
| stats count(totalResponseTime) as TotalTrans by Product URI methodName _time
| eventstats sum(eval(RangetotalResponseTime="totalResponseTime")) as TS<3S by Product URI methodName
| eventstats sum(eval(RangetotalResponseTimeabv3sec="totalResponseTimeabv3sec")) as TS>3S by Product URI methodName
| eval SLI=case(Product="ABC", round('TS<3S'/TotalTrans*100,4))
| rename methodName AS Method
| where (Product="ABC") and (SLI<99)
| stats sum(TS>3S) as AvgImpact count(URI) as DataOutage by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage
| sort Product URI Method
Hi all, after revisiting the installation document, I found that the Enterprise Console should be started first. The EC now starts successfully, but it cannot be accessed via the browser GUI at "http://<server-name>:9191" because of a permission issue.

bin/platform-admin.sh start-platform-admin
Starting Enterprise Console Database ....
***** Enterprise Console Database started *****
Starting Enterprise Console application
Waiting for the Enterprise Console application to start.........
***** Enterprise Console application started on port 9191 *****
@kaede_oogami  Yes, let me explain the CHARSET (character encoding) options available when configuring a sourcetype in Splunk. There are indeed several Shift-JIS-related encodings to choose from; the main differences are as follows:
1. SHIFT-JIS: the standard Shift-JIS encoding. It covers the character set defined in JIS X 0208.
2. SJIS: frequently used as an alias for SHIFT-JIS, and in most cases means the same thing.
3. MS932: Microsoft's extended Shift-JIS encoding. It is based on SHIFT-JIS but supports additional characters (NEC special characters, IBM extension characters, etc.). It is the Japanese encoding commonly used on Windows.
4. CP932: another name for MS932, short for "Code Page 932".
5. Windows-31J: the IANA-registered name for MS932. Technically identical to MS932, but sometimes used as the more formal name.
In practice:
- For standard Shift-JIS documents, selecting SHIFT-JIS or SJIS is fine.
- For documents created in a Windows environment, or documents that may contain extended characters, MS932 or Windows-31J is the safer choice.
Splunk offers these options so it can handle data coming from different systems and environments. Choosing the appropriate encoding lets Splunk parse and index Japanese text correctly. If you are unsure which encoding to pick for a particular data source, consider where the data comes from and the characteristics of the system that produced it.
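The practical difference between strict Shift-JIS and the MS932/CP932 extensions can be seen with Python's built-in codecs (a quick illustration outside Splunk): the NEC special character ① (bytes 0x87 0x40) decodes under cp932 but is rejected by the strict shift_jis codec.

```python
# Byte sequence 0x87 0x40 is the NEC special character "①" (circled one).
# It exists in CP932/MS932 (Windows extended Shift-JIS) but is outside
# the strict JIS X 0208 repertoire that plain Shift-JIS covers.
data = b"\x87\x40"

print(data.decode("cp932"))  # the Windows code page decodes it fine

try:
    data.decode("shift_jis")  # strict codec: raises UnicodeDecodeError
except UnicodeDecodeError as e:
    print("strict shift_jis rejects it:", e.reason)
```

This is exactly why a file produced on Windows may index cleanly with CHARSET=CP932 but show decode errors with CHARSET=SHIFT-JIS.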
Hi @MediumToast
If you only specify netfw,index,site1_netfw, it will not apply to all events from sources that are configured to be sent to the netfw index. It will only apply to events with the exact key netfw. Also, SC4S does not support wildcards in the splunk_metadata.csv file, so each sourcetype must be explicitly defined. If you have multiple Cisco devices (or any other types) that you want to redirect to site1_netfw, you will need to list each one individually.
You could get around this by updating the compliance_meta_by_source.conf and compliance_meta_by_source.csv files, e.g. like this (please test):

compliance_meta_by_source.conf:

filter f_netfw_sources {
    program("cisco_asa" type(string)) or
    program("cisco_ios" type(string)) or
    program("cisco_nexus" type(string)) or
    program("juniper_netscreen" type(string))
    # Add other relevant network firewall source types here
};

compliance_meta_by_source.csv:

f_netfw_sources,.splunk.index,site1_netfw
Hi @Team,   Could you please help me on looping over inputs in splunk soar. my requirement: I am having input like this , input=['a','b','c','d'] I need to run query on each value from input like first it must take 'a' value and run query then from run query result i need to take sys id and pass it to create ticket. Note: we are using 6.1.1(On-prem) Please help me on this    Regards, Harish
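The per-value loop being asked about can be sketched in plain Python (run_query and create_ticket below are hypothetical stand-ins for the actual SOAR playbook actions, e.g. a ServiceNow "run query" and "create ticket"; only the chaining logic is the point):

```python
# Minimal sketch of the loop logic, independent of the SOAR APIs.
# run_query and create_ticket are hypothetical placeholders for the
# real playbook actions.
def run_query(value):
    # Pretend each query result carries a sys_id derived from the input.
    return {"sys_id": f"sys_{value}"}

def create_ticket(sys_id):
    return f"ticket_for_{sys_id}"

def process(inputs):
    tickets = []
    for value in inputs:               # loop over every input value
        result = run_query(value)      # step 1: run the query for this value
        sys_id = result["sys_id"]      # step 2: extract sys_id from the result
        tickets.append(create_ticket(sys_id))  # step 3: create the ticket
    return tickets

print(process(['a', 'b', 'c', 'd']))
```

In a SOAR playbook this loop would typically live in a custom function or utility block that fans the values out to the downstream actions.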
Hi @kc_prane , try this - create a new eval field (ServiceGroup) to check whether ServiceName is A or B, else assign it to "Other_Services":

| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"
| eval ServiceGroup = case(
    ServiceName == "A", "A",
    ServiceName == "B", "B",
    1==1, "Other_Services")
| stats avg(Trans_Time) as Avg_Trans_Time, count as Count by ServiceGroup
| rename ServiceGroup as ServiceName
| sort ServiceName
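The case() above is the usual fold-into-a-default-bucket pattern; a quick Python analogue of the same grouping and averaging (sample event data invented for illustration):

```python
from statistics import mean

# Sample (service name, transaction time in ms) events - invented data.
events = [("A", 60), ("B", 40), ("C", 20), ("D", 30), ("F", 25)]

groups = {}
for name, t in events:
    # Keep A and B as-is; everything else falls into "Other_Services",
    # mirroring the 1==1 default branch of the SPL case().
    group = name if name in ("A", "B") else "Other_Services"
    groups.setdefault(group, []).append(t)

# Per-group average and count, like the stats command.
for group in sorted(groups):
    times = groups[group]
    print(group, round(mean(times), 2), len(times))
```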
Hi @Real_captain you can use append to combine the two searches, then get the status using an eval if condition:

`macro_events_all_win_ops_esa` sourcetype=WinHostMon host=P9TWAEVV01STD
    (TERM(Esa_Invoice_Processor) OR TERM(Esa_Final_Demand_Processor) OR TERM(Esa_Initial_Listener_Service) OR TERM(Esa_MT535_Parser) OR TERM(Esa_MT540_Parser) OR TERM(Esa_MT542_Withdrawal_Request) OR TERM(Esa_MT544_Parser) OR TERM(Esa_MT546_Parser) OR TERM(Esa_MT548_Parser) OR TERM(Esa_SCM Batch_Execution) OR TERM(Euroclear_EVIS_Border_Internal) OR TERM(EVISExternalInterface))
| stats latest(State) as Current_Status by service
| where Current_Status != "Running"
| stats count as count_of_stopped_services
| eval status = if(count_of_stopped_services = 0, "OK", "NOK")
| fields status
| append
    [ search `macro_events_all_win_ops_esa` host="P9TWAEVV01STD" sourcetype=WinEventLog "Batch *Failed" System_Exception="*"
    | stats count as count_of_failed_batches
    | eval status = if(count_of_failed_batches = 0, "OK", "NOK")
    | fields status ]
| stats values(status) as status_list
| eval final_status = if(mvcount(mvfilter(status_list=="NOK")) > 0, "NOK", "OK")
| fields final_status
I have ServiceNames (A, B, C, D, E, F, G, H) but want the results for ServiceNames (C, D, E, F, G, H) combined and renamed as "Other_Services".

My base search:
| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"

Required results:

ServiceName                          Trans_Time   Count
A                                    60           1111
B                                    40           1234
Other_Services (C, D, E, F, G, H)    25           1234567
Hello,
I'm new to the AppDynamics world. When I tried to create a platform after the installation (as in the messages attached below) with the following command, I got the error message that follows. Can anyone advise me how to resolve this issue? Thanks.
-- Jonathan Wang, 2024/07/30

Command:
[root@appd-server platform-admin]# bin/platform-admin.sh create-platform --name myappd --installation-dir /usr/local/appdynamics/platform2/

IOException while parsing API response: Failed to connect to appd-server/fe80:0:0:0:be24:11ff:fed4:bf11%2:9191

================== Installation step, and associated log below. ==========
I finished the AppDynamics installation with the following command (on Rocky Linux 9.4):

./platform-setup-x64-linux-21.4.4.24619.sh

and got the following completion messages:

Installing Enterprise Console Database. Please wait as this may take a few minutes...
Installing Enterprise Console Database...
Installing Enterprise Console Application. Please wait...
Installing Enterprise Console Application...
Creating Enterprise Console Application login...
Copying timezone scripts to mysql archives...
Creating Enterprise Console Application login...
Setup has finished installing AppDynamics Enterprise Console on your computer.
To install and manage your AppDynamics Platform, use the Enterprise Console CLI from the /usr/local/appdynamics/platform2/platform-admin/bin directory.
Finishing installation ...
I am also getting the same error in Splunk ES 9.1.0.2. Has anyone found a solution for this?

Max retries exceeded with url: /v1/actions/process-check-result (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
Couldn't you theoretically deploy an app with a scripted input that would make changes to files in etc/system/local? I'm not saying this is the best method, because if it fails you could brick the forwarder, but I would think it is theoretically possible if no other means are available.
I'm trying to accomplish the same thing. Were you able to come up with a solution?
If I run the below code I get events in the output JSON file. If I want statistics instead (error count and stdev in the JSON file), is there an API available, and how can I use the Python code to get these values?

import os
import urllib.parse
import requests

payload = (
    'search index="prod_k8s_onprem_vvvb_nnnn" "k8s.namespace.name"="apl-siii-iiiii" '
    '"k8s.container.name"="uuuu-dss-prog" NOT k8s.container.name=istio-proxy '
    'NOT log.level IN(DEBUG,INFO) (error OR exception) '
    '(earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")\n'
    '| addinfo\n'
    '| bin _time span=5m@m\n'
    '| stats count(eval(log.level="ERROR")) as error_count by _time\n'
    '| eventstats stdev(error_count)'
)
print(payload)
payload_escaped = f'search={urllib.parse.quote(payload)}'
headers = {
    'Authorization': f'Bearer {splunk_token}',
    'Content-Type': 'application/x-www-form-urlencoded'
}
url = f'https://{splunk_host}:{splunk_port}/services/search/jobs/export?output_mode=json'
response = requests.post(url, headers=headers, data=payload_escaped, verify=False)
print(f'{response.status_code=}')
txt = response.text
if response.status_code == 200:
    json_txt = f'[\n{txt}]'
    os.makedirs('data', exist_ok=True)
    with open("data/output_deploy.json", "w") as f:
        f.write(json_txt)
else:
    print(txt)
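For context on the question above: the /services/search/jobs/export endpoint streams newline-delimited JSON, one object per result row, so the statistics produced by the stats/eventstats pipeline can be pulled out of the response with the standard json module. A sketch (the field names match the query above; the sample input lines are invented):

```python
import json

# Sample of what export?output_mode=json emits: one JSON object per line,
# with each row's fields under "result". Values invented for illustration.
ndjson = '\n'.join([
    '{"preview":false,"result":{"_time":"2024-07-25T11:30:00","error_count":"4","stdev(error_count)":"1.5"}}',
    '{"preview":false,"result":{"_time":"2024-07-25T11:35:00","error_count":"7","stdev(error_count)":"1.5"}}',
])

# Parse each line, then lift out the statistics fields.
rows = [json.loads(line)["result"] for line in ndjson.splitlines() if line.strip()]
error_counts = [int(r["error_count"]) for r in rows]
stdev = float(rows[0]["stdev(error_count)"])  # eventstats copies it onto every row

print(sum(error_counts), stdev)
```

The same parsing applies to the saved data/output_deploy.json file, minus the wrapping brackets the script adds.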
Why have you got timeSinceLastSeen in the by clause - this was not suggested by @gcusello - what do you get when you do exactly as suggested?
I found the issue described in Symptom 1 of this link https://splunk.my.site.com/customer/s/article/No-Clients-Showing-up-on-Deployment-Server-After-Upgrade-to-9-2-0-1 Resolved!
After upgrading my deployment server to Enterprise 9.2.2 the clients are no longer connecting to the deployment server. When I launch my DS UI and check for clients connecting, it says 0. Has anyone had this issue?
It is one of several blocks of lines inside the log file. Each starts with the little snippet I put above and then has any number of lines after it. While the file is a .txt, it looks to me like an XML document that pushes out the log file. I've not seen one like it before. I was thinking I'd need a props or a transform, or both, to set this date/time, but it's my first experience with it.
Wow.  The developer that created that log needs to be taught how to use Splunk so he can see how awful his creation is. Is that one event or several?  Or is that the prologue to the log file? You may be able to use a custom datetime.xml file or you may want to consider an input script that normalizes the timestamp.
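As a sketch of the "input script that normalizes the timestamp" idea (the timestamp format and the sample line below are assumptions, since the actual log layout isn't shown): read each line, rewrite any recognizable timestamp into ISO 8601, and emit the result for Splunk to consume.

```python
import re
from datetime import datetime

# Hypothetical source format: "07/30/2024 14:05:09" embedded in header lines.
# Adjust the regex and strptime pattern to whatever the real log uses.
TS_PATTERN = re.compile(r"\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}")

def normalize_line(line):
    """Rewrite the first matching timestamp on a line into ISO 8601."""
    def to_iso(m):
        return datetime.strptime(m.group(0), "%m/%d/%Y %H:%M:%S").isoformat()
    return TS_PATTERN.sub(to_iso, line, count=1)

print(normalize_line('<entry time="07/30/2024 14:05:09">started</entry>'))
# A scripted input would loop over sys.stdin and print each normalized line.
```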
I have tried the below query as per your suggestion, but I am not getting the result:

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| eval timeSinceLastSeen=tostring(secondsSinceLastSeen, "duration")
| stats count BY user timeSinceLastSeen
| append
    [| rest /services/authentication/users
    | rename title as user
    | eval count=0
    | fields user ]
| stats sum(count) AS total BY user timeSinceLastSeen