All Posts

Hi @MartyJ , as I described, you can use a text input instead of an html one; it's easier. Ciao. Giuseppe
Hi @ashwinve1385 , with the Victoria Experience you can access (via GUI only) the search heads (SHs), not the indexers (IDXs). Ciao. Giuseppe
Hi @Poojitha , the first question is: why? Creating fields at index time puts additional load on the indexers during indexing, so it is only advisable if you don't have a big volume of data. Anyway, you have to create the field at index time as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/Configureindex-timefieldextraction using an ingest-time eval action, as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/IngestEval

In props.conf:

[your_sourcetype]
TRANSFORMS-eval1 = eval1

In transforms.conf:

[eval1]
INGEST_EVAL = owner_email=json_extract(_raw, "TagData{}.Email")

(please check the path of your JSON field)

In fields.conf:

[owner_email]
INDEXED = true

Ciao. Giuseppe
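Once the configuration is deployed and new data has been indexed, you can verify that the field is really indexed (not extracted at search time) with tstats. A minimal check, assuming the index and sourcetype names used elsewhere in this thread:

| tstats count where index=*_test sourcetype="test:sourcetype" by owner_email

If the field were only extracted at search time, tstats would return no results for it.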
Hi All,

TagData: [
    { Key: Application, Value: Test_App }
    { Key: Email, Value: test@abc.com }
]

I have nested JSON data as above. I want to extract the Email field value and map it to a new field, owner_email. This needs to be done at index time. With a normal Splunk search, I can get it this way:

index=*_test sourcetype="test:sourcetype" source="*:test"
| array2object path="TagData" key="Key" value="Value"
| rename "TagData.Email" as owner_email

Please help me with how to achieve this at index time. How do I update the props.conf file?

Regards, PNV
Thanks @KendallW , that is a good reference for CRUD with KOs in Splunk, but the script is out of date; some functions no longer work. And it uses the REST API instead of the SDK. I am asking about the Splunk SDK.
Hi @tatdat171 Check out this script by @harsmarvania57  https://github.com/harsmarvania57/splunk-ko-change 
I don't think you are looking to join two searches, because the two searches operate on the same data source and in the same time interval. What you want is to connect a transaction request with an ORA-00001 error if one happens. ORA-00001, if it happens, should directly follow that transaction request. (Otherwise your problem is unsolvable.) The _raw field in your last output actually represents the error message you want to display, not so much the raw events. In other words, from the data you illustrated, you want something like

_time | transaction_id | error_log
2024-06-14 04:35:50 | 48493009394940303 | 240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

(Important: when you say "output as", you should illustrate the actual output (anonymized as needed) of a search, not just field names.)

This should get what you wanted:

index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| transaction startswith="<=== Recv'd TRN:" endswith="ORA-00001" maxevents=2
| fields _* transaction_id
``` the delimiter in split is a literal newline: split the merged transaction back into its component events ```
| eval error_log = split(_raw, "
")
| mvexpand error_log
| where match(error_log, "ORA-00001")
| table _time transaction_id error_log

For this type of problem, transaction is appropriate. Here is an emulation for you to play with and compare with real data:

| makeresults
| eval data = mvappend("240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:")
| mvexpand data
| rename data AS _raw
| rex "^(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%y%m%d %T")
| sort - _time
``` the above emulates index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001") ```
Hi everyone, I am writing a Python script to move knowledge objects to another app. For example, with a lookup file I can do it with the REST API endpoint

servicesNS/<user>/<app_name>/data/lookup-table-files/<lookup_file_name>/move

But I can't find a function to do that in the Splunk Python SDK: https://docs.splunk.com/DocumentationStatic/PythonSDK/1.7.4/client.html#splunklib.client.Configurations

If you have experience with the Splunk SDK, please share it with me. Thanks!
I think the OP's test_MID_IP.csv contains test_IP, not IP. (Although it doesn't need to.) It doesn't need count, but may (or may not) need MID. Also, the append option is needed for the output lookup to preserve all existing data.

index=abc IP!="10.*"
    [| inputlookup ip_tracking.csv
     | rename test_DATA AS MID
     | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| where isnull(test_IP)
| dedup IP
| rename IP as test_IP
| fields test_IP MID ``` omit MID if that's not needed ```
| outputlookup append=true test_MID_IP.csv
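To sanity-check what the tracking lookup accumulates over successive runs, a quick inspection search (the file name is the one used in this thread):

| inputlookup test_MID_IP.csv
| stats count AS total_rows dc(test_IP) AS unique_ips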
Thank you for your commentary. I appreciate it.
Hi @ianthomas, Looks like that is a bug - I've never noticed it because my computer and Splunk time zones have always been in sync. The issue arises because the "now" time is generated via JavaScript, so it obeys the time zone of the local PC. I can make a tweak to ensure that the time zone set in Splunk is used to generate "now". I'll let you know when I've posted an update to the app on Splunkbase. Cheers. Daniel
Hi @heskez try using the join command:

<left-dataset>
| join left=L right=R where L.pid = R.pid <right-dataset>

https://docs.splunk.com/Documentation/SCS/current/SearchReference/JoinCommandOverview
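To make that template concrete, here is a sketch filling in hypothetical dataset names (processes on the left, process_catalog on the right), both sharing a pid field; adjust the names to your actual datasets:

| from processes
| join left=L right=R where L.pid = R.pid
    [| from process_catalog]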
@kmjefferson42 if that dashboard was made with Splunk 7, then you probably would have to add version="1.1" to the <form> header. I don't think there's anything problematic about copy/pasting the old dashboard XML into a new dashboard.
Try something like this:

index=XXX sourcetype=XXX
    [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
     | fields host]
| fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
         | fields host]
    | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| sort - "CPU %"
| head 10
| lookup bd_users_hierarchy.csv user OUTPUT user, email as user_email, UserName, Director, VP, Director_email, VP_email
``` classic SPL join does not support the SPL2 "left=L right=R where" form, so rename app_id to user in the subsearch instead ```
| join type=left user
    [search index=imdc_ops_13m sourcetype=usecase_contact app_id="*"
    | dedup app_id
    | rename app_id AS user
    | table _time user app_owner app_team_dl]
Hey @tscroggins, thanks for replying. Is there anything you know about this particular error? The error code is (_ssl.c:1106). Is there a Splunk guide for these errors?
Hi @MK2 the Monitoring Console is arguably the best place to check your forwarder versions, although keep in mind that all the data there is populated by internal Splunk searches, so you can search the data yourself if you need a different visualization. For example:

index="_internal" source="*metrics.lo*" group=tcpin_connections
| dedup guid
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| eval connectionType=case(fwdType=="uf","universal forwarder",
    fwdType=="lwf","lightweight forwarder",
    fwdType=="full","heavy forwarder",
    connectionType=="cooked" OR connectionType=="cookedSSL","Splunk forwarder",
    connectionType=="raw" OR connectionType=="rawSSL","legacy forwarder")
| eval build=if(isnull(build),"n/a",build)
| eval version=if(isnull(version),"pre 4.2",version)
| eval guid=if(isnull(guid),sourceHost,guid)
| eval os=if(isnull(os),"n/a",os)
| eval arch=if(isnull(arch),"n/a",arch)
| table sourceHost connectionType sourceIp ssl ack build version os arch guid
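If you just want a quick summary of how many forwarders run each version, a minimal variant of the same search (using the same internal fields as above):

index="_internal" source="*metrics.lo*" group=tcpin_connections
| dedup guid
| eval version=if(isnull(version),"pre 4.2",version)
| stats count AS forwarders by version
| sort - forwarders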
I assume the answer is to check Forwarder Management under Settings, or to check Forwarder Deployment in the Monitoring Console. Is there any other way?
In the "better late than never" category of answers (and I realize this answer might not have been available in previous versions of Splunk)... It's unclear, from the original question, if the "ip... See more...
In the "better late than never" category of answers (and I realize this answer might not have been available in previous versions of Splunk)... It's unclear, from the original question, if the "ip:port" belongs to the service, or the client. If it belongs to the service, then every timeout uniquely identifies the service, and all that needs to be done is to count the timeouts, and then map in the service name: | makeresults | eval data="CONNECTION-1.1.1.1:1: connect() timeout,[service_with_2_timeouts] tearing down tcp connection [1.1.1.1.1],CONNECTION-1.1.1.2:2: connect() timeout,[service_with_1_timeout] tearing down tcp connection [1.1.1.2.2],[service_with_no_timeouts] tearing down tcp connection [1.1.1.3.3],CONNECTION-1.1.1.1:1: connect() timeout,[service_with_2_timeouts] tearing down tcp connection [1.1.1.1.1]" | eval mvdata=split(data,",") | mvexpand mvdata ``` Everything above this is to generate sample data ``` | eval is_timeout=if(like(mvdata,"%connect() timeout%"),1,0) | rex field=mvdata "CONNECTION-(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?<port>\d+): connect\(\) timeout" | rex field=mvdata "\[(?<service_name>[^\]]+)\] tearing down tcp connection \[(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.(?<port>\d+)\]" | stats first(service_name) as service_name, sum(is_timeout) as timeout_count by ip, port   If, on the other hand, the "ip:port" belong to the client accessing the service, this is a bit more complicated, with too many potential solutions depending on details not available here.