All Posts

Usually, instead of using join, you can replace it with stats, which performs a lot better. Try something like this and adjust it to your needs:

index=INDEXA OR index=INDEXB
| stats values(fieldB) AS fieldB values(fieldC) AS fieldC values(fieldX) AS fieldX values(fieldY) AS fieldY values(fieldZ) AS fieldZ by fieldA
| fillnull value=unknown fieldZ
| stats count(fieldB) AS fieldB count(fieldC) AS fieldC count(fieldX) AS fieldX count(fieldY) AS fieldY by fieldA, fieldZ

First use OR to merge the info from both indexes and use stats to group the other fields by fieldA. Then, since there will be gaps of information in some fields, you can use fillnull to fill those gaps. Finally, count all fields by fieldA and fieldZ.

Also check this post: https://community.splunk.com/t5/Splunk-Search/Replace-join-with-stats-to-merge-events-based-on-common-field/m-p/321060
Better late than never: assuming your Windows host monitoring polls the hosts at regular intervals and logs a success or failure, and you want a simple line chart with values 1 for up and 0 for down in some interval (say 10 minutes), you could do this:

sourcetype=WinHostMon
| eval status_num=if(Status="up",1,0)
| timechart span=10m min(status_num) by Host
@gcusello : Thanks for your response. Story in short: I want to map certificate details from one of the sources to fields in the Certificates datamodel: https://docs.splunk.com/Documentation/CIM/5.3.2/User/Certificates. This is my requirement. I have mapped two fields using FIELDALIAS - ssl_issuer and ssl_end_time. Now I want to map TagData.Email to ssl_issuer_email. I am using these fields further. Regards, PNV
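If it helps, a minimal props.conf sketch of that third mapping (the sourcetype name is a placeholder; note that FIELDALIAS only aliases a field that is already extracted at search time, e.g. via KV_MODE=json):

[test:sourcetype]
# Alias the already-extracted JSON field to the CIM Certificates field name.
# Quotes are needed because the source field name contains a dot.
FIELDALIAS-ssl_issuer_email = "TagData.Email" AS ssl_issuer_email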
Let's say I have 2 lookup files: lookup1 has 50 values and the other has 150 values. When I inner join lookup1 to lookup2, it gives me fewer results, but when I reverse the order, the results change and are higher.
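A minimal sketch of the two directions, assuming both files share a key field named id (a hypothetical name - adjust to your files):

| inputlookup lookup1.csv
| join type=inner id
    [| inputlookup lookup2.csv]

| inputlookup lookup2.csv
| join type=inner id
    [| inputlookup lookup1.csv]

Some asymmetry is expected here: an inner join keeps only rows of the left (outer) dataset that find a match, and by default Splunk's join pairs each left-hand row with at most one matching subsearch row (max=1), so the result count is bounded by whichever file you start from.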
Below is my scenario, as described by an Oracle DBA. I have two indexes:

INDEXA: fieldA fieldB fieldC
INDEXB: fieldA fieldX fieldY fieldZ

First I need to join them both; it will be kind of a LEFT JOIN, as you probably noticed, by fieldA. Then group it by fieldA+fieldZ and count each group.

In DBA language, something like:

select a.fieldA, b.fieldZ, count(*)
from indexA A left join indexB B on a.fieldA=b.fieldA
group by a.fieldA, b.fieldZ

Any hints?

K.
Hi @MartyJ , as I described, you can use a text input instead of an html element; it's easier. Ciao. Giuseppe
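For reference, a minimal SimpleXML sketch of such a text input (token name and label are placeholders):

<input type="text" token="free_text_tok">
  <label>Enter a value</label>
  <default></default>
</input>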
Hi @ashwinve1385 , using the Victoria experience you can access (only via the GUI) only the SHs, not the IDXs. Ciao. Giuseppe
Hi @Poojitha ,
the first question is: why? Creating fields at index time puts additional load on the indexers during indexing; this is feasible only if you don't have a big volume of data.
Anyway, you have to create fields at index time as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/Configureindex-timefieldextraction using an ingest eval action, described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/IngestEval

In props.conf:

[your_sourcetype]
TRANSFORMS-eval1 = eval1

In transforms.conf:

[eval1]
INGEST_EVAL = field3=json_extract(email, "TagData{}.Email")

(please check the path of your json field)

In fields.conf:

[field3]
INDEXED = true

Ciao. Giuseppe
Hi All,

TagData: [
  {
    Key: Application
    Value: Test_App
  }
  {
    Key: Email
    Value: test@abc.com
  }
]

I have nested json data as above. I want to extract the Email field value and map it to a new field - owner_email. This needs to be done at indexing time. With a normal Splunk search, I can get it this way:

index=*_test sourcetype="test:sourcetype" source="*:test"
| array2object path="TagData" key="Key" value="Value"
| rename "TagData.Email" as owner_email

Please help me with how to achieve this at indexing time. How do I update the props.conf file? Regards, PNV
Thanks @KendallW , that is a good reference for CRUD on KOs in Splunk, but the script is out of date; some functions no longer work. Also, it uses the REST API instead of the SDK. I am asking about the Splunk SDK.
Hi @tatdat171 Check out this script by @harsmarvania57  https://github.com/harsmarvania57/splunk-ko-change 
I don't think you are looking to join two searches, because the two searches operate on the same data source and in the same time interval. What you want is to connect a transaction request with an ORA-00001 error if one happens. ORA-00001, if it happens, should directly follow that transaction request. (Otherwise your problem is unsolvable.) The _raw field in your last output actually represents the error message you want to display, not so much raw events. In other words, from the data you illustrated, you want something like:

_time                transaction_id     error_log
2024-06-14 04:35:50  48493009394940303  240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

(Important: when you say "output as", you should illustrate actual output (anonymized as needed) of a search, not just field names.) This should get what you want:

index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| transaction startswith="<=== Recv'd TRN:" endswith="ORA-00001" maxevents=2
| fields _* transaction_id
| eval error_log = split(_raw, "\n")
| mvexpand error_log
| where match(error_log, "ORA-00001")
| table _time transaction_id error_log

For this type of problem, transaction is appropriate. Here is an emulation for you to play with and compare with real data:

| makeresults
| eval data = mvappend("240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:")
| mvexpand data
| rename data AS _raw
| rex "^(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%y%m%d %T")
| sort - _time
``` the above emulates index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001") ```
Hi everyone, I am writing a Python script to move knowledge objects to another app. For example, with a lookup file I can do it with the REST API endpoint:

servicesNS/<user>/<app_name>/data/lookup-table-files/<lookup_file_name>/move

But I can't find a function to do that in the Splunk Python SDK: https://docs.splunk.com/DocumentationStatic/PythonSDK/1.7.4/client.html#splunklib.client.Configurations  If you have experience with the Splunk SDK, please share it with me. Thanks!
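While this waits for answers, a minimal sketch of one possible route: the SDK has no dedicated move helper, but Service.post() can reach the same REST endpoint. Host, credentials, lookup name, and destination app below are all placeholders:

import splunklib.client as client

# Connect in the namespace (owner/app) that currently holds the lookup file.
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme",
    owner="nobody", app="search",
)

# POST the move action on the lookup. "app" (destination app) and "user"
# (destination owner) are sent as a raw form-encoded body, because post()'s
# own app/owner keyword arguments would be read as the request namespace.
service.post(
    "data/lookup-table-files/my_lookup.csv/move",
    body="app=target_app&user=nobody",
)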
I think the OP's test_MID_IP.csv contains test_IP, not IP. (Although it doesn't need to.) It doesn't need count, but may (or may not) need MID. Also, the append option is needed for outputlookup to preserve the existing data in the table.

index=abc IP!="10.*"
    [| inputlookup ip_tracking.csv
    | rename test_DATA AS MID
    | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| where isnull(test_IP)
| dedup IP
| rename IP as test_IP
| fields test_IP MID ``` omit MID if that's not needed ```
| outputlookup append=true test_MID_IP.csv
Thank you for your commentary. I appreciate it.
Hi @ianthomas, Looks like that is a bug - I've never noticed it because my computer and Splunk time zones have always been in sync. The issue comes because the "now" time is generated via JavaScript, and so it obeys the time zone of the local PC. I can make a tweak to ensure that the time zone set by Splunk is used to generate "now".   I'll let you know when I've posted an update to the app on Splunkbase. Cheers. Daniel  
Hi @heskez try using the join command:

<left-dataset>
| join left=L right=R where L.pid = R.pid
    <right-dataset>

https://docs.splunk.com/Documentation/SCS/current/SearchReference/JoinCommandOverview
@kmjefferson42 if that dashboard was made with Splunk 7, then you probably would have to add version="1.1" to the <form> header, as in the sketch below. I don't think there's anything problematic with copy/pasting the old dashboard XML into a new dashboard.
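A minimal sketch of the header (label and body are placeholders):

<form version="1.1">
  <label>Migrated dashboard</label>
  <!-- paste the old dashboard's fieldset, rows and panels here -->
</form>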
Try something like this:

index=XXX sourcetype=XXX
    [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
    | fields host]
| fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
        | fields host]
    | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| sort - "CPU %"
| head 10
| lookup bd_users_hierarchy.csv user OUTPUT user, email as user_email, UserName, Director, VP, Director_email, VP_email
| join left=L right=R where L.user=R.app_id
    [index=imdc_ops_13m sourcetype=usecase_contact app_id="*"
    | dedup app_id
    | table _time app_id app_owner app_team_dl]
Hey @tscroggins, thanks for replying. Is there anything you know about this particular error? That error code (_ssl.c:1106) - is there a Splunk guide for these errors?