
All Posts

Thanks @KendallW, that is a good reference for CRUD on knowledge objects in Splunk, but the script is out of date and some of its functions no longer work. It also uses the REST API rather than the SDK. I am asking about the Splunk Python SDK.
Hi @tatdat171, check out this script by @harsmarvania57: https://github.com/harsmarvania57/splunk-ko-change
I don't think you are looking to join two searches, because the two searches operate on the same data source and in the same time interval. What you want is to connect a transaction request with an ORA-00001 error if one occurs. ORA-00001, if it happens, should directly follow that transaction request. (Otherwise your problem is unsolvable.) The _raw field in your last output actually represents the error message you want to display, not so much raw events. In other words, from the data you illustrated, you want something like:

_time                 transaction_id      error_log
2024-06-14 04:35:50   48493009394940303   240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

(Important: when you say "output as", you should illustrate the actual output of a search, anonymized as needed, not just field names.) This should get what you wanted:

index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| transaction startswith="<=== Recv'd TRN:" endswith="ORA-00001" maxevents=2
| fields _* transaction_id
| eval error_log = split(_raw, "\n")
| mvexpand error_log
| where match(error_log, "ORA-00001")
| table _time transaction_id error_log

For this type of problem, transaction is appropriate. Here is an emulation for you to play with and compare with real data:

| makeresults
| eval data = mvappend("240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.",
    "240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:")
| mvexpand data
| rename data AS _raw
| rex "^(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%y%m%d %T")
| sort - _time
``` the above emulates index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001") ```
Hi everyone, I am writing a Python script to move knowledge objects to another app. For example, with a lookup file I can do it with this REST API endpoint:

servicesNS/<user>/<app_name>/data/lookup-table-files/<lookup_file_name>/move

But I can't find a function to do that in the Splunk Python SDK. https://docs.splunk.com/DocumentationStatic/PythonSDK/1.7.4/client.html#splunklib.client.Configurations  If you have experience with the Splunk SDK, please share it with me. Thanks!
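The closest workaround I have found so far: splunklib's Service object inherits a generic post() method, so the same move endpoint can be called through the SDK even without a dedicated helper. A minimal sketch, assuming a lookup my_lookup.csv currently in source_app being moved to target_app (all hypothetical names), and assuming your splunklib version accepts a dict for the body argument:

import splunklib.client as client

# Connect to splunkd's management port (hypothetical credentials).
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme")

# The typed SDK classes don't expose "move", but Service inherits a generic
# post() that can hit any REST path. The namespace in the path is the
# lookup's current location; the body names the destination app and owner,
# mirroring the data/lookup-table-files/<name>/move endpoint.
service.post(
    "/servicesNS/nobody/source_app/data/lookup-table-files/my_lookup.csv/move",
    body={"app": "target_app", "user": "nobody"})

If someone knows a first-class SDK method for this, I would still prefer that.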
I think the OP's test_MID_IP.csv contains test_IP, not IP. (Although it doesn't need to.) It doesn't need count, but may (or may not) need MID. Also, the append option is needed for the lookup table to preserve all data.

index=abc IP!="10.*"
    [| inputlookup ip_tracking.csv
     | rename test_DATA AS MID
     | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| where isnull(test_IP)
| dedup IP
| rename IP as test_IP
| fields test_IP MID ``` omit MID if that's not needed ```
| outputlookup append=true test_MID_IP.csv
Thank you for your commentary. I appreciate it.
Hi @ianthomas, Looks like that is a bug - I've never noticed it because my computer and Splunk time zones have always been in sync. The issue comes because the "now" time is generated via JavaScript, and so it obeys the time zone of the local PC. I can make a tweak to ensure that the time zone set by Splunk is used to generate "now".   I'll let you know when I've posted an update to the app on Splunkbase. Cheers. Daniel  
Hi @heskez try using the join command:

<left-dataset> | join left=L right=R where L.pid = R.pid <right-dataset>

https://docs.splunk.com/Documentation/SCS/current/SearchReference/JoinCommandOverview
@kmjefferson42 if that dashboard was made with Splunk 7, then you probably would have to add version="1.1" to the <form> header. I don't think there's anything problematic with copy/pasting the old dashboard XML into a new dashboard.
Try something like this:

index=XXX sourcetype=XXX
    [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
     | fields host]
| fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*"
         | fields host]
    | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| sort - "CPU %"
| head 10
| lookup bd_users_hierarchy.csv user OUTPUT user, email as user_email, UserName, Director, VP, Director_email, VP_email
| join left=L right=R where L.user=R.app_id
    [index=imdc_ops_13m sourcetype=usecase_contact app_id="*"
    | dedup app_id
    | table _time app_id app_owner app_team_dl]
Hey @tscroggins, thanks for replying. Is there anything you know about this particular error? That error code (_ssl.c:1106): is there a Splunk guide for these errors?
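From what I can tell, "(_ssl.c:1106)" isn't a Splunk error code at all: it's the line in CPython's _ssl.c source file where the OpenSSL failure surfaced, so any Python TLS failure can carry a marker like that, and the number changes between Python builds. A minimal sketch that reproduces the same style of message (self-signed.badssl.com is just a public TLS test host, nothing Splunk-specific):

import socket
import ssl

# create_default_context() verifies server certificates by default, so
# connecting to a host with an untrusted (self-signed) certificate fails
# the same way a Python-based Splunk component does when it distrusts a cert.
ctx = ssl.create_default_context()
try:
    with socket.create_connection(("self-signed.badssl.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="self-signed.badssl.com"):
            pass
except ssl.SSLCertVerificationError as err:
    # Prints something like:
    #   [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:
    #   self-signed certificate (_ssl.c:1006)
    # where the _ssl.c line number depends on the Python build.
    print(err)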
Hi @MK2 the monitoring console is arguably the best place to check your forwarder versions, although keep in mind all the data there is populated by internal Splunk searches, so you can also search the data yourself if you need a different visualization. E.g.

index="_internal" source="*metrics.lo*" group=tcpin_connections
| dedup guid
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| eval connectionType=case(fwdType=="uf", "universal forwarder", fwdType=="lwf", "lightweight forwarder", fwdType=="full", "heavy forwarder", connectionType=="cooked" OR connectionType=="cookedSSL", "Splunk forwarder", connectionType=="raw" OR connectionType=="rawSSL", "legacy forwarder")
| eval build=if(isnull(build), "n/a", build)
| eval version=if(isnull(version), "pre 4.2", version)
| eval guid=if(isnull(guid), sourceHost, guid)
| eval os=if(isnull(os), "n/a", os)
| eval arch=if(isnull(arch), "n/a", arch)
| table sourceHost connectionType sourceIp ssl ack build version os arch guid
I assume the answer is to check Forwarder Management under Settings, or to check Forwarder Deployment in the Monitoring Console. Is there any other way?
In the "better late than never" category of answers (and I realize this answer might not have been available in previous versions of Splunk)... It's unclear, from the original question, if the "ip... See more...
In the "better late than never" category of answers (and I realize this answer might not have been available in previous versions of Splunk)... It's unclear, from the original question, if the "ip:port" belongs to the service, or the client. If it belongs to the service, then every timeout uniquely identifies the service, and all that needs to be done is to count the timeouts, and then map in the service name: | makeresults | eval data="CONNECTION-1.1.1.1:1: connect() timeout,[service_with_2_timeouts] tearing down tcp connection [1.1.1.1.1],CONNECTION-1.1.1.2:2: connect() timeout,[service_with_1_timeout] tearing down tcp connection [1.1.1.2.2],[service_with_no_timeouts] tearing down tcp connection [1.1.1.3.3],CONNECTION-1.1.1.1:1: connect() timeout,[service_with_2_timeouts] tearing down tcp connection [1.1.1.1.1]" | eval mvdata=split(data,",") | mvexpand mvdata ``` Everything above this is to generate sample data ``` | eval is_timeout=if(like(mvdata,"%connect() timeout%"),1,0) | rex field=mvdata "CONNECTION-(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?<port>\d+): connect\(\) timeout" | rex field=mvdata "\[(?<service_name>[^\]]+)\] tearing down tcp connection \[(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.(?<port>\d+)\]" | stats first(service_name) as service_name, sum(is_timeout) as timeout_count by ip, port   If, on the other hand, the "ip:port" belong to the client accessing the service, this is a bit more complicated, with too many potential solutions depending on details not available here.
Hi @bmanikya please confirm if my understanding is correct: You want to match the "user" field from the first screenshot with the "user" field from the bd_users_hierarchy.csv lookup, and the "app_id" field from the third screenshot?
If I'm reading this right, you have data with events for pods and their phases. In your example query, you appear to be using decimal values to create your ranges, but can we assume that the actual pod states fall on specific integers? Something like this might work:

| makeresults count=25
| eval phase=(random()%5)+1
``` Everything above here is just to create sample data ```
``` The following statement groups and counts phases ```
| stats count by phase
``` The following statement maps phases to a string equivalent ```
| eval label=case(phase=1,"A (Pending)", phase=2,"B (Running)", phase=3,"C (Succeeded)", phase=4,"D (Failed)", phase=5,"E (Stopping?)", 1=1,"Unknown")

If the phase values are not discrete, and the ranges you mention are necessary, then you can use a case statement like this:

| makeresults count=25
| eval phase=((random()%50)/10)+1
| eval phase_group=case(phase<1.5,1, phase<2.5,2, phase<3.5,3, phase<4.5,4, phase<5.5,5)
| stats count by phase_group
| eval label=case(phase_group=1,"A (Pending)", phase_group=2,"B (Running)", phase_group=3,"C (Succeeded)", phase_group=4,"D (Failed)", phase_group=5,"E (Stopping?)", 1=1,"Unknown")
Could you please share a screenshot?
Assume for the moment that these work individually:

Outputs1

[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[indexer_discovery:company]
pass4SymmKey = passhere
manager_uri = https://clustermanager:8089

[tcpout:primary_indexers]
indexerDiscovery = company
sslCertPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cacert.pem

Outputs2

[tcpout]
defaultGroup = heavy_forwarders
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[tcpout:primary_heavy_forwarders]
server = y.y.y.y:9997
sslCertPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercacert.pem

If I understand the documentation correctly, all we would need to do is this:

[tcpout]
defaultGroup = primary_indexers, primary_heavy_forwarders
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[indexer_discovery:company]
pass4SymmKey = passhere
manager_uri = https://clustermanager:8089

[tcpout:primary_indexers]
indexerDiscovery = company
sslCertPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cacert.pem

[tcpout:primary_heavy_forwarders]
server = y.y.y.y:9997
sslCertPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercacert.pem

Is this correct? In this configuration, the exact same data would be flowing to both destinations? There would be no issues binding the certificates to different stanzas? I appreciate the responses.