All Posts


@KendallW Thanks for your tips. But when I search index=card in the Search app, the result is nothing.
Yes, if Windows offers the option to renderXml, using it is better than plain text. Either way, you need to parse with a search command. As for this event, you do need to use semantics to present such data. When you say the message is in French, do you mean you have difficulty understanding the language? If so, seek assistance with that. This is a security failure during an account login. The account of significance is Albert. Maybe set up an extraction after the verb, like

| rex "Compte pour lequel l’ouverture de session .+ : ID de sécurité : (?<securityID>\S+)\s+Nom du compte : (?<accountName>\S+)\s+Domaine du compte : (?<accountDomain>\S+)"

However, if you see two separate events in Splunk when the original event is one, there may be a line-breaker problem. Fix that first. (XML can make line breaking more robust.)
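To sanity-check the suggested | rex outside Splunk, here is a minimal Python sketch. The sample event text is made up (a plausible single-line rendering of a French logon-failure message); real Windows events are multi-line, which may require re.DOTALL.

```python
import re

# Hypothetical sample of the French logon-failure text, flattened to one line
event = ("Compte pour lequel l’ouverture de session a échoué : "
         "ID de sécurité : S-1-0-0 "
         "Nom du compte : Albert "
         "Domaine du compte : CONTOSO")

# Same pattern as the suggested | rex, in Python named-group syntax
pattern = (r"Compte pour lequel l’ouverture de session .+ : "
           r"ID de sécurité : (?P<securityID>\S+)\s+"
           r"Nom du compte : (?P<accountName>\S+)\s+"
           r"Domaine du compte : (?P<accountDomain>\S+)")

m = re.search(pattern, event)
print(m.group("accountName"))  # Albert
```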
Hi @silverKi
To classify logs into multiple indexes based on one sourcetype:

props.conf:

[test]
TRANSFORMS-routing = bankRouting,cardRouting,errorRouting

Note: use the plural form TRANSFORMS-routing, not TRANSFORM-routing.

transforms.conf:

[bankRouting]
REGEX = (?i)bank
DEST_KEY = _MetaData:Index
FORMAT = bank

[cardRouting]
REGEX = (?i)card
DEST_KEY = _MetaData:Index
FORMAT = card

[errorRouting]
REGEX = (?i)error
DEST_KEY = _MetaData:Index
FORMAT = error

Notes:
- Use (?i) for case-insensitive matching.
- DEST_KEY must be _MetaData:Index.
- FORMAT should be the exact index name.

outputs.conf:

[tcpout]
defaultGroup = defaultGroup

[tcpout:defaultGroup]
server = 192.168.111.153:9997

[tcpout-server://192.168.111.151:9997]
index = card

[tcpout-server://192.168.111.152:9997]
index = error
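As a quick sanity check of the regex routing above, here is a small Python sketch (not Splunk code) of the routing decision. The sample events are made up; the sketch mimics the fact that matching transforms are applied in order, each overwriting the destination index, so the last match wins.

```python
import re

# (pattern, destination index) pairs, mirroring the transforms.conf stanzas above
routes = [
    (re.compile(r"(?i)bank"), "bank"),
    (re.compile(r"(?i)card"), "card"),
    (re.compile(r"(?i)error"), "error"),
]

def route(event, default="main"):
    """Return the index the event would be routed to."""
    index = default
    for pattern, dest in routes:
        if pattern.search(event):
            index = dest  # a later matching transform overwrites an earlier one
    return index

print(route("username=abc, cardtype=credit"))  # card
print(route("BANK transfer ok"))               # bank
print(route("plain heartbeat"))                # main
```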
Hi @kc_prane, what's the difference from your previous question? Anyway, the solution hinted at by @KendallW is similar to my previous one. Ciao. Giuseppe
Thank you, good to hear that we are not the only ones. What Linux version are you running?
Hi @kc_prane, you shared only a part of your search, so I cannot check it. Anyway, does it solve your requirement? Ciao. Giuseppe
Hi @harishsplunk7, using my original search, you are checking whether the users defined on your Splunk instance logged in during the last 30 days; if not (count=0), they are listed. In other words: the users who have not logged in to Splunk in the last 30 days. Why doesn't my search run for you? The only check you can perform is if (or when) users did last log in; there is no "did not log in" trace. Adding timeSinceLastSeen, the added list of users isn't considered in the count, so you cannot check them. Ciao. Giuseppe
This is my test.log:

[07-30-2024 02:19:22] +0900 INFO LMTracker [14307 MainThread] username=fIg-Jvkf, Visa, cardtype=credit, cardnumber=7085-5579-5664-8197, cvc=794, expireday=05/26, user-phone=852-9765-3539, comapny=IBK, com-tel=02-885-8485, address=7547 0c2F1YA76CHEkgw Street, city=Seoul, Country=Korea, status=500 Internal Server Error, Server error. Please try again later card.
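To see how fields could be pulled out of a line like this, here is a small Python sketch using a key=value regex. It is illustrative only; inside Splunk, automatic KV extraction or | extract would normally do this.

```python
import re

# Truncated copy of the sample log line above
line = ("[07-30-2024 02:19:22] +0900 INFO LMTracker [14307 MainThread] "
        "username=fIg-Jvkf, Visa, cardtype=credit, cardnumber=7085-5579-5664-8197, "
        "cvc=794, expireday=05/26, user-phone=852-9765-3539, comapny=IBK")

# Capture key=value pairs; keys may contain hyphens, values stop at the next comma
fields = dict(re.findall(r"([\w-]+)=([^,]+)", line))

print(fields["cardtype"])    # credit
print(fields["cardnumber"])  # 7085-5579-5664-8197
```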
Currently, my sourcetype contains a mix of bank logs and card logs. I would like to categorize this into `index=bank` and `index=card` respectively. Currently, the search is done with index=main, and all data is displayed. If index=bank, I want only bank-related logs to be output. We set up the forwarder as follows and created bank, card, and error indexes on the server that will receive the data. This is the code I have written so far; I need help.

splunk@heavy-forwarder:/opt/splunk/etc/apps/search/local:> cat inputs.conf
[monitor:///opt/splunk/var/log/splunk/test.log]
disabled = false
host = heavy-forwarder
sourcetype = test
crcSalt = <SOURCE>

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat props.conf
[test]
TRANSFORM-routing=bankRouting,cardRouting,errorRouting

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat transform.conf
[bankRouting]
REGEX=bank
DEST_KEY =_INDEX
FORMAT = bankGroup

[cardRouting]
REGEX=card
DEST_KEY =_INDEX
FORMAT = cardGroup

[errorGroup]
REGEX=error
DEST_KEY =_INDEX
FORMAT = errorGroup

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat outputs.conf
[tcpout:bankGroup]
server = 192.168.111.153:9997

[tcpout:cardGroup]
server = 192.168.111.151:9997

[tcpout:errorGroup]
server = 192.168.111.152:9997
Or with JSON:

{
  "type": "splunk.table",
  "dataSources": {
    "primary": "ds_5ds4f5"
  },
  "title": "Device Inventory",
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "{{row.target_url}}",
        "newTab": true
      }
    }
  ]
}
Hi @SplunkerNoob, first create a field in your search which contains the URLs, e.g.

... | eval target_url=case(
    device_type=="type1", "https://device1.com",
    device_type=="type2", "https://device2.com",
    device_type=="type3", "https://device3.com",
    1==1, "https://default.com"
)

Then in your dashboard:

<drilldown>
  <link target="_blank">{{row.target_url}}</link>
</drilldown>
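The case() expression above is just a lookup from device type to URL with a default. A Python equivalent, using the same hypothetical device types and URLs:

```python
# Hypothetical device-type → URL mapping, mirroring the eval case() above
URLS = {
    "type1": "https://device1.com",
    "type2": "https://device2.com",
    "type3": "https://device3.com",
}

def target_url(device_type):
    # The final catch-all branch in case() acts like dict.get's fallback
    return URLS.get(device_type, "https://default.com")

print(target_url("type2"))    # https://device2.com
print(target_url("unknown"))  # https://default.com
```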
Hi @Gauri
You can use "|eventstats" instead of "|stats" to keep the data in the pipeline for the later "|stats" command. (Note: in eval, comparisons need "==" rather than "=", and to count matches use sum(eval(if(...,1,0))); field names containing "<" or ">" must be quoted.)

| eval totalResponseTime=round(requestTimeinSec*1000)
| convert num("requestTimeinSec")
| rangemap field="totalResponseTime" "totalResponseTime"=0-3000
| rename range as RangetotalResponseTime
| eval totalResponseTimeabv3sec=round(requestTimeinSec*1000)
| rangemap field="totalResponseTimeabv3sec" "totalResponseTimeabv3sec"=3001-60000
| rename range as RangetotalResponseTimeabv3sec
| eval Product=case((like(proxyUri,"URI1") AND like(methodName,"POST")) OR (like(proxyUri,"URI2") AND like(methodName,"GET")) OR (like(proxyUri,"URI3") AND like(methodName,"GET")), "ABC")
| bin span=5m _time
| stats count(totalResponseTime) as TotalTrans by Product URI methodName _time
| eventstats sum(eval(if(RangetotalResponseTime=="totalResponseTime",1,0))) as "TS<3S" by Product URI methodName
| eventstats sum(eval(if(RangetotalResponseTimeabv3sec=="totalResponseTimeabv3sec",1,0))) as "TS>3S" by Product URI methodName
| eval SLI=case(Product=="ABC", round('TS<3S'/TotalTrans*100,4))
| rename methodName AS Method
| where (Product="ABC") and (SLI<99)
| stats sum("TS>3S") as AvgImpact count(URI) as DataOutage by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage
| sort Product URI Method
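The underlying arithmetic is simple: the SLI is the percentage of transactions at or under the 3-second threshold. A minimal Python sketch with made-up response times in milliseconds:

```python
# Hypothetical response times (ms) for one Product/URI/Method group
response_times = [1200, 2500, 800, 4100, 2900, 3050, 1500, 900]

THRESHOLD_MS = 3000
fast = sum(1 for t in response_times if t <= THRESHOLD_MS)  # "TS<3S"
slow = len(response_times) - fast                           # "TS>3S"
sli = round(fast / len(response_times) * 100, 4)

print(fast, slow, sli)  # 6 2 75.0
```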
Hi ALL,
After revisiting the installation document, I found that I should start the Enterprise Console first. Now the EC starts successfully, but it cannot be accessed via the browser GUI at "http://<server-name>:9191" because of a permission issue.

bin/platform-admin.sh start-platform-admin
Starting Enterprise Console Database ....
***** Enterprise Console Database started *****
Starting Enterprise Console application
Waiting for the Enterprise Console application to start.........
***** Enterprise Console application started on port 9191 *****
@kaede_oogami
Yes, let me explain the CHARSET (character encoding) options available when configuring a sourcetype in Splunk.
There are indeed several Shift-JIS-related character encodings to choose from; the main differences are as follows:
1. SHIFT-JIS:
- The standard Shift-JIS encoding.
- Covers the character set defined in JIS X 0208.
2. SJIS:
- Often used as an alias for SHIFT-JIS.
- In most cases it means the same thing as SHIFT-JIS.
3. MS932:
- Microsoft's extended Shift-JIS encoding.
- Based on SHIFT-JIS, but supports additional characters (NEC special characters, IBM extension characters, etc.).
- The Japanese encoding commonly used on Windows.
4. CP932:
- An alias for MS932; short for "Code Page 932".
5. Windows-31J:
- The IANA-registered name for MS932.
- Technically the same as MS932, but sometimes used as the more formal name.
In practice:
- For standard Shift-JIS documents, choosing SHIFT-JIS or SJIS is fine.
- For documents created in a Windows environment, or that may contain extended characters, choosing MS932 or Windows-31J is safer.
Splunk offers these options to handle data coming from different systems and environments. Choosing the appropriate encoding lets Japanese text be parsed and indexed correctly.
If you are unsure which encoding to choose for a particular data source, consider where the data comes from and the characteristics of the system that generated it.
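The practical difference between plain Shift-JIS and CP932/MS932 is easy to demonstrate with the NEC special characters. A minimal Python check (Python's codec names are shift_jis and cp932, where cp932 corresponds to MS932/Windows-31J):

```python
# "①" (circled digit one) is an NEC special character: it exists in
# CP932/MS932 but not in plain Shift-JIS.
text = "①"

print(text.encode("cp932"))  # b'\x87@'

try:
    text.encode("shift_jis")
except UnicodeEncodeError:
    print("not representable in plain Shift-JIS")
```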
Hi @MediumToast
If you only specify netfw,index,site1_netfw, it will not apply to all events from sources that are configured to be sent to the netfw index. It will only apply to events with the exact key netfw. Also, SC4S does not support wildcards in the splunk_metadata.csv file, so each sourcetype must be explicitly defined. If you have multiple Cisco devices (or any other types) that you want to redirect to site1_netfw, you will need to list each one individually.
You could get around this by updating the compliance_meta_by_source.conf and compliance_meta_by_source.csv files, e.g. like this (please test):

compliance_meta_by_source.conf:

filter f_netfw_sources {
    program("cisco_asa" type(string)) or
    program("cisco_ios" type(string)) or
    program("cisco_nexus" type(string)) or
    program("juniper_netscreen" type(string))
    # Add other relevant network firewall source types here
};

compliance_meta_by_source.csv:

f_netfw_sources,.splunk.index,site1_netfw
Hi @Team,
Could you please help me with looping over inputs in Splunk SOAR?
My requirement: I have an input like this: input=['a','b','c','d']
I need to run a query on each value from the input: first it must take the value 'a' and run the query, then from the run-query result I need to take the sys_id and pass it to create a ticket.
Note: we are using 6.1.1 (on-prem).
Please help me with this.
Regards,
Harish
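In a SOAR playbook or custom function the exact calls depend on your connectors, but the control flow being asked for is just a loop. A hedged Python sketch, where run_query and create_ticket are hypothetical stand-ins for the real app actions:

```python
# Hypothetical stand-ins for the "run query" and "create ticket" actions;
# in a real SOAR playbook these would be app actions, not local functions.
def run_query(value):
    # pretend each query returns a record containing a sys_id
    return {"sys_id": f"sysid-for-{value}"}

def create_ticket(sys_id):
    return f"ticket created for {sys_id}"

inputs = ["a", "b", "c", "d"]

tickets = []
for value in inputs:           # take each input value in turn
    result = run_query(value)  # 1) run the query for this value
    sys_id = result["sys_id"]  # 2) pull the sys_id out of the result
    tickets.append(create_ticket(sys_id))  # 3) pass it to ticket creation

print(len(tickets))  # 4
```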
Hi @kc_prane, try this - create a new eval field (ServiceGroup) to check whether ServiceName is A or B, else assign it to "Other_Services":

| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"
| eval ServiceGroup = case(
    ServiceName == "A", "A",
    ServiceName == "B", "B",
    1==1, "Other_Services"
)
| stats avg(Trans_Time) as Avg_Trans_Time, count as Count by ServiceGroup
| rename ServiceGroup as ServiceName
| sort ServiceName
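The grouping rule can be checked outside Splunk as well; a small Python sketch over made-up (ServiceName, Trans_Time) pairs:

```python
from collections import defaultdict

# Hypothetical extracted events: (ServiceName, Trans_Time in ms)
events = [("A", 60), ("B", 40), ("C", 20), ("D", 30), ("A", 60), ("E", 25)]

groups = defaultdict(list)
for name, t in events:
    # Same rule as the eval case(): keep A and B, bucket everything else
    group = name if name in ("A", "B") else "Other_Services"
    groups[group].append(t)

for group in sorted(groups):
    times = groups[group]
    print(group, sum(times) / len(times), len(times))
```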
Hi @Real_captain, you can use append to combine the two searches, then get the status using an eval if condition (note that eval comparisons use "==", not "="):

`macro_events_all_win_ops_esa` sourcetype=WinHostMon host=P9TWAEVV01STD (TERM(Esa_Invoice_Processor) OR TERM(Esa_Final_Demand_Processor) OR TERM(Esa_Initial_Listener_Service) OR TERM(Esa_MT535_Parser) OR TERM(Esa_MT540_Parser) OR TERM(Esa_MT542_Withdrawal_Request) OR TERM(Esa_MT544_Parser) OR TERM(Esa_MT546_Parser) OR TERM(Esa_MT548_Parser) OR TERM(Esa_SCM Batch_Execution) OR TERM(Euroclear_EVIS_Border_Internal) OR TERM(EVISExternalInterface))
| stats latest(State) as Current_Status by service
| where Current_Status != "Running"
| stats count as count_of_stopped_services
| eval status = if(count_of_stopped_services == 0, "OK", "NOK")
| fields status
| append [
    search `macro_events_all_win_ops_esa` host="P9TWAEVV01STD" sourcetype=WinEventLog "Batch *Failed" System_Exception="*"
    | stats count as count_of_failed_batches
    | eval status = if(count_of_failed_batches == 0, "OK", "NOK")
    | fields status
]
| stats values(status) as status_list
| eval final_status = if(mvcount(mvfilter(status_list=="NOK")) > 0, "NOK", "OK")
| fields final_status
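The final roll-up is just "NOK if any branch reported NOK, otherwise OK"; a quick Python sketch of that aggregation:

```python
def final_status(statuses):
    # NOK if any appended sub-search contributed a NOK, otherwise OK
    return "NOK" if "NOK" in statuses else "OK"

print(final_status(["OK", "NOK"]))  # NOK
print(final_status(["OK", "OK"]))   # OK
```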
I have ServiceNames (A, B, C, D, E, F, G, H) but want the (C, D, E, F, G, H) ServiceNames' results combined and renamed as "Other_Services".

My base search:
| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"

Required results:

ServiceName                          Trans_Time   Count
A                                    60           1111
B                                    40           1234
Other_Services (C, D, E, F, G, H)    25           1234567
Hello,
I'm new to the AppDynamics world. When I tried to create a platform after the installation (see the messages attached below) with the following command, I got the error message shown next. Can anyone advise me how to resolve this issue? Thanks.
-- Jonathan Wang, 2024/07/30

Command ==>
[root@appd-server platform-admin]# bin/platform-admin.sh create-platform --name myappd --installation-dir /usr/local/appdynamics/platform2/
IOException while parsing API response: Failed to connect to appd-server/fe80:0:0:0:be24:11ff:fed4:bf11%2:9191

================== Installation step, and associated log below. ==========
I finished the AppDynamics installation with the following command (on Rocky Linux 9.4):
./platform-setup-x64-linux-21.4.4.24619.sh
and got the following completion messages:
Installing Enterprise Console Database. Please wait as this may take a few minutes...
Installing Enterprise Console Database...
Installing Enterprise Console Application. Please wait...
Installing Enterprise Console Application...
Creating Enterprise Console Application login...
Copying timezone scripts to mysql archives...
Creating Enterprise Console Application login...
Setup has finished installing AppDynamics Enterprise Console on your computer.
To install and manage your AppDynamics Platform, use the Enterprise Console CLI from the /usr/local/appdynamics/platform2/platform-admin/bin directory.
Finishing installation ...