All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, below are the log details:
index=ABC sourcetype=logging_0
These are the values of the "ErrorMessage" field:
invalid - 5 count
unprocessable - 7 count (5 paired with invalid + 2 others)
no user foundv - 3 count
invalid message process - 3 count
process failed - 3 count
I need to eliminate ErrorMessage=invalid and ErrorMessage=unprocessable and then show all other ErrorMessage values. The problem is that "unprocessable" is also logged alongside other messages, so we cannot simply drop every "unprocessable" event. Whenever an "invalid" ErrorMessage is logged, an "unprocessable" ErrorMessage is logged at the same time, and it is only that pair we need to eliminate, not every "unprocessable" ErrorMessage.
Expected result:
unprocessable - 2 count
no user foundv - 3 count
invalid message process - 3 count
process failed - 3 count
I tried a join on requestId, but it returns nothing, because I use | search ErrorMessage="invalid" and then eliminate it in the next query, so the search never looks at the other ErrorMessages. Can someone please help?
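A hedged sketch of one possible approach, assuming each invalid/unprocessable pair shares the same requestId (an assumption taken from the join attempt in the post): flag requests that contain an "invalid" event, then drop "unprocessable" events only on those requests.

index=ABC sourcetype=logging_0
| eventstats count(eval(ErrorMessage="invalid")) AS invalid_on_request BY requestId
| where ErrorMessage!="invalid" AND NOT (ErrorMessage="unprocessable" AND invalid_on_request > 0)
| stats count BY ErrorMessage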
Hello, could anyone assist me in creating a correlation search that detects triggered alerts across all searches? This would let us monitor the counts and automatically notify us if any situation escalates beyond control. Thanks
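A hedged sketch of one possible starting point, assuming the scheduler logs in _internal are available and that "triggered alerts" means saved searches whose alert actions fired; any notification threshold on top of this would be whatever count you consider an escalation.

index=_internal sourcetype=scheduler status=success alert_actions=*
| stats count AS alerts_fired BY savedsearch_name app
| sort - alerts_fired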
Hi All, I need help writing a query based on the field "Timestamp", which is different from the "_time" value. Sample event in XML format:
Email: xyz@gmail.com
RoleName: User
RowKey: 123456
Timestamp: 2023-12-13T23:56:18.200016+00:00
UserId: mno
UserName: acho
This is one sample event; it contains a specific "Timestamp" field that is completely different from the _time value. I want to pull events based only on that "Timestamp" value for a particular day, for example yesterday, 2023-12-13, i.e. from 2023-12-13 00:00:00 to 2023-12-13 23:59:59. How can I write the query for this? The base search is: index=abc host=xyz sourcetype=xxx
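A hedged sketch of one way to do this, assuming Timestamp is already extracted as a field in the format shown (the strptime format string and the hard-coded day are illustrative and may need adjusting to the exact data):

index=abc host=xyz sourcetype=xxx
| eval ts=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%z")
| where ts>=strptime("2023-12-13 00:00:00", "%Y-%m-%d %H:%M:%S") AND ts<=strptime("2023-12-13 23:59:59", "%Y-%m-%d %H:%M:%S")
| table _time Timestamp Email UserId UserName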
Hi Team, I am using a query with the same index and source that fetches two result sets based on different search strings and combines them into a single table. Now I want to display the results along with the timestamp, in ascending order.

index=index1 source=source1 CASE("latest") AND "id" AND "dynamoDB data retrieved for ids" AND "material"
| eval PST=_time-28800
| eval PST_TIME3=strftime(PST, "%Y-%d-%m %H:%M:%S")
| spath output=dataNotFoundIdsCount path=dataNotFoundIdsCount
| stats values(*) as * by _raw
| table dataNotFoundIdsCount, PST_TIME3
| sort - PST_TIME3
| appendcols
    [ search index=index1 source=source1 CASE("latest") AND "id" AND "sns published count" AND "material"
      | eval PST=_time-28800
      | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
      | spath snsPublishedCount output=snsPublishedCount
      | spath output=republishType path=republishType
      | spath output=version path=republishInput.version
      | spath output=publish path=republishInput.publish
      | spath output=nspConsumerList path=republishInput.nspConsumerList{}
      | spath output=objectType path=republishInput.objectType
      | stats values(*) as * by _raw
      | table snsPublishedCount,republishType,version,publish,nspConsumerList,objectType,PST_TIME4
      | sort - PST_TIME4 ]
| table PST_TIME4 objectType version republishType publish nspConsumerList snsPublishedCount dataNotFoundIdsCount
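A hedged sketch of an alternative structure, assuming the two result sets should be interleaved by time rather than pasted side by side: appendcols pairs rows purely by position, so a time-ordered view is usually easier to get from a single search sorted on one shared time field. Note also that a "%Y-%d-%m" timestamp string does not sort chronologically as text, while "%Y-%m-%d" does. Field and search terms below are taken from the post.

index=index1 source=source1 CASE("latest") AND "id" AND "material" AND ("dynamoDB data retrieved for ids" OR "sns published count")
| eval PST_TIME=strftime(_time-28800, "%Y-%m-%d %H:%M:%S")
| spath output=dataNotFoundIdsCount path=dataNotFoundIdsCount
| spath output=snsPublishedCount path=snsPublishedCount
| table PST_TIME dataNotFoundIdsCount snsPublishedCount
| sort 0 PST_TIME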
Hi All, I am getting an error when using a wildcard against a multivalue field. I am using mvfind to find a string.

eval test_loc=case(isnotnull(Region,%bangalore%), Bangalore)

I am only showing part of the eval statement here. Example: Region = "sh bangalore Test". The eval statement should match this Region and set test_loc = Bangalore. I tried passing * and % (*bangalore*, %bangalore%), but I get an error. Please help me. Thanks, poojitha NV
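A hedged sketch of what may be intended, assuming a substring match on "bangalore" is the goal: case() itself does not understand wildcards, but like() (SQL-style % wildcards) or match() (regex) can be used inside the condition, and the result value must be quoted.

| eval test_loc=case(like(Region, "%bangalore%"), "Bangalore")

or, case-insensitively:

| eval test_loc=case(match(Region, "(?i)bangalore"), "Bangalore")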
Hello! I'm new to Splunk, so any help is much appreciated. I have two queries over different indexes.

Query1:
index=rdc sourcetype=sellers-marketplace-api-prod custom_data | search "custom_data.result.id"="*" | dedup custom_data.result.id | timechart span=1h count

Query2:
index=leads host="pa*" seller_summary | spath input="Data" | search "0.lead.form.page_name"="seller_summary" | dedup 0.id | timechart span=1h count

I would like to write a query that computes Query1 - Query2 for the counts in each hour, in the same format. Thank you!!
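A hedged sketch of one way to get the hourly difference, assuming the two searches can be combined with append and told apart by a label (index and field names are taken from the post; subsearch limits may apply to very large result sets):

index=rdc sourcetype=sellers-marketplace-api-prod custom_data "custom_data.result.id"="*"
| dedup custom_data.result.id
| eval series="q1"
| append
    [ search index=leads host="pa*" seller_summary
      | spath input="Data"
      | search "0.lead.form.page_name"="seller_summary"
      | dedup 0.id
      | eval series="q2" ]
| timechart span=1h count by series
| eval diff=q1-q2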
Hi, I need help with a Splunk search. My requirement is to get stats for the failed and successful counts, along with the percentage failed and successful, and finally to keep only the rows where the failed percentage is greater than 10%. My query works fine up to this point:

index=abcd
| eval status=case(statuscode < 400, "Success", statuscode > 399, "Failed")
| stats count(status) as TOTAL count(eval(status="Success")) as Success_count count(eval(status="Failed")) as Failed_count by Name, URL
| eval Success%=((Success_count/TOTAL)*100)
| eval Failed%=((Failed_count/TOTAL)*100)

This works and I get a table with Name, URL, TOTAL, Success_count, Failed_count, Success%, Failed%. But when I add the following to the query, it fails:

| where Failed% > 10

How do I filter the table above to Failed% > 10? Please assist.
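A hedged note on the likely fix: in where (and on the right-hand side of eval expressions), field names containing special characters such as % have to be wrapped in single quotes, otherwise the % is parsed as part of the expression; double quotes would turn it into a string literal. So the final line would look something like:

| where 'Failed%' > 10

An alternative is to avoid the special character entirely, e.g. name the field Failed_pct.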
I have data like this:

{
   env: prod
   host: prod01
   name: appName
   info: {
      data: [ ... ]
      indicators: [
         {
            details: {
               A.runTime: 434
               A.Count: 0
               B.runTime: 0
               B.Count: 0
               ....
            }
            name: timeCountIndicator
            status: UP
         }
         {
            details: {
               A.downCount: 2
               A.nullCount: 0
               B.downCount: 0
               B.nullCount: 0
               ....
            }
            name: downCountIndicator
            status: UP
         }
      ]
      status: DOWN
   }
   metrics: { ... }
   ping: 1
}

I only want to extract the fields under info.indicators{}.details when info.indicators{}.name of that entry is "timeCountIndicator". I tried spath combined with table, mvexpand and where:

... | spath path=info.indicators{} output=indicators | table indicators | mvexpand indicators | where match(indicators,"timeCountIndicator")

However, this returns the record as a string, and it is hard to convert that string back into fields for further processing. (Technically extract/rex can deal with it, but it takes a really long time to extract every field in details when I only need some of them.) Is there an easier way to deal with this?
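A hedged sketch of one approach, assuming each event contains JSON like the sample: expand the indicators array into one row per indicator, keep only the timeCountIndicator entry, then run spath again on that entry so its details become ordinary fields.

... | spath path=info.indicators{} output=indicator
| mvexpand indicator
| where spath(indicator, "name")="timeCountIndicator"
| eval _raw=indicator
| spath
| table details.*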
Hello - I have several dashboards that present the user with a pop-up box (screenshot not shown). Reviewing the browser console, the culprit seems to be common.js. The dashboard is already using version="1.1", which I have seen suggested in other posts, but the issue persists. The dashboard doesn't reference any .js scripts, nor does it use any lookups to generate results.

<form version="1.1" hideEdit="false">

Any suggestions are appreciated. Thank you.
Hello Splunkers, I am new to Splunk and am trying to figure out how to parse nested JSON data spit out by an end-of-line test. Here is a sample event:

{"serial_number": "PLACEHOLDER1234", "type": "Test", "result": "Pass", "logs": [{"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"}, {"test_name": "Hardware Rev", "result": "Pass", "received": "4"}, {"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"}, {"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"}, {"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"}, {"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"}, {"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"}, {"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."}, {"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"}, {"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"}, {"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"}, {"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}, {"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"}, {"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"}, {"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"}, {"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}

I want to be able to extract the test data (all key-value pairs) from each test. Ideally I would like to create dashboard charts showing the responses from the Motor and Fan tests, among others. Here is a sample search I have been using, which lets me create a table with the serial number, overall test result, individual test name, and individual test result:

index="factory_mtp_events" | search sourcetype="placeholder" source="placeholder" serial_number="PLACEHOLDER*" | spath logs{} output=logs | stats count by serial_number result logs | eval _raw=logs | spath test_name output=test_name | spath result output=test_result | table serial_number result test_name test_result

How can I index into the logs{} section and pull out all results depending on test_name?
So, how can I query for logs{}.test_name="Motor" and have the result yield:

{"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}
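A hedged sketch of one way to pull out just the Motor entry, assuming events look like the sample above: expand the logs{} array into one row per test, re-parse each entry with spath, and filter on test_name.

index="factory_mtp_events" sourcetype="placeholder" source="placeholder" serial_number="PLACEHOLDER*"
| spath path=logs{} output=log_entry
| mvexpand log_entry
| spath input=log_entry
| where test_name="Motor"
| table serial_number test_name result R_rpm_ugc R_rpm F_rpm_ugc F_rpm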
I have this search query, and it works fine:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ("test.aspx")
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) As "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name,Today_Calls,Avg_today,Perc90_today

PFA screenshot for the results. Now I am trying to pass the pp_user_action_name values from the test.csv file, and I am not getting any results:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ([| inputlookup test.csv])
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) As "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name,Today_Calls,Avg_today,Perc90_today

How can I fix this? Thanks in advance.
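A hedged note on the likely fix: a bare [| inputlookup test.csv] subsearch returns whole rows rather than a list that in() can use, so the where clause matches nothing. One common pattern, assuming the lookup column holding the action names is called pp_user_action_name (an assumption about the CSV layout), is to let the subsearch generate a search filter instead:

...
| spath output=pp_user_action_name input=user_actions path=name
| search [ | inputlookup test.csv | fields pp_user_action_name ]
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
...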
I have configured the app for Microsoft 365, which was working properly, but it stopped working; after checking, it was found that one of the keys or certificates had expired. I contacted the administrator asking for the "Client Secret" and he gave me that information, but the app also asks for the "Cloud App Security Token" field, and I really have no idea what information I should request from the administrator for it. I would be grateful if you could explain, if possible. Thanks
Hi, Could someone assist me in setting the threshold for this correlation search in ES? It's generating an excessive number of notables over the last 7 days, roughly around 30k. How can we reduce the number of notables? Additionally, I've provided the bytes_out data for the last 24 hrs. Please set the threshold based on that data. | tstats `summariesonly` count values(sourcetype) AS sourcetype, values(All_Traffic.src_zone) AS src_zone, earliest(_time) as earliest, latest(_time) as latest, values(All_Traffic.action) AS action. values(All_Traffic.bytes_out) AS bytes_out, values(All_Traffic.bytes_in) AS bytes_in, sum(All_Traffic.bytes) AS bytes, values(All_Traffic.direction) AS direction, values(All_Traffic.app) AS app, from datamodel=Network_Traffic ("bytes_out" 163 594 594 594 594 294 686 215 392 392 98 954 215 86 424 900 530 594 594 117 294 882 148 258 320 594 516 142 215 159 215 86 98 98 369 401 159 215 215 594 212 215 220 585 203 594 680 212 159 159 159 159 159 718 159 159 159 159 594 221 146 318 318 159 159 318 318 318 318 159 159 159 159 159 159 636 318 159 159 159 159 159 159 159 159 159 159 159 159 159 159 318 159 318 318 318 318 326 159 159 753 159 326 657 912 159 318 159 159 159 159 159 318 148 148 814 594 320 159 159 159 159 159 159 159 159 159 318 318 159 795 318 318 159 159 565 870 159 321 912 318 318 508 159 159 567 487 159 836 507 159 159 318 477 318 318 159 159 318 318 318 477 246 155 594 594 594 594 594 594 99 159 159 222 241 159 438 565 400 159 159 159 318 795 148 119 667 159 479 486 477 477 406 828 477 222 222 148 753 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 594 784 323 594 318 159 388 318 318 711 318 388 159 159 159 159 350 350 318 318 560 318 318 719 318 646 620 159 801 159 620 159 779 318 912 318 318 318 318 318 318 323 641 810 318 318 318 323 620 318 620 318 870 159 159 159 620 461 318 318 779 318 870 159 870 323 388 318 318 870 318 350 832 318 159 318 318 810 318 159 318 318 318 318 318 733 318 323 323 323 651 159 159 318 318 318 318 318 318 159 159 159 159 159 159 159 159 159 159 159 159 318 318 318 318 159 159 159 159 159 159 159 159 318 159 159 159 159 159 159 159 318 159 319 318 318 665 935 356 574 197 197 201 159 477 477 963 477 486 159 318 159 594 155 824 400 350 318 477 222 159 222 296 518 666 318 477 171 318 318 159 159 159 159 155 318 318 318 318 477 159 159 159 159 318 318 159 318 159 159 318 722 318 318 439 549 328 477 159 318 964 603 318 318 159 159 196 370 148 753 159 159 569 159 765 477 594 370 370 318 318 636 318 466 587 428 444 159 148 148 159 159 159 159 159 159 159 159 159 159 159 159 159 753 594 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 477 159 758 326 979 159 159 318 318 318 318 318 594 318 318 159 318 159 318 159 159 159 159 159 159 159 159 318 318 318 318 159 159 636 159 159 679 159 753 667 318 318 318 159 159 159 159 753 331 331 318 159 649 159 353 353 159 159 512 159 326 955 159 753 159 326 326 159 159 912 753 159 159 594 325 325 318 318 912 159 318 159 318 326 159 159 753 159 326 924 318 943 159 665 159 594 594 400 159 159 159 159 159 159 159 159 159 159 159 159 159 159 908 222 439 525 318 159 603 159 159 148 222 318 318 728 318 318 159 159 159 159 155 155)
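A hedged sketch of one way to derive a threshold from observed traffic rather than picking one by hand, assuming the intent is to alert only on unusually large outbound transfers; the 95th-percentile baseline and the 1-hour span below are placeholder choices for illustration, not tuned recommendations.

| tstats `summariesonly` sum(All_Traffic.bytes_out) AS bytes_out from datamodel=Network_Traffic by All_Traffic.src _time span=1h
| eventstats perc95(bytes_out) AS baseline
| where bytes_out > baseline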
I have a multivalue field which I would like to expand into individual fields, like so:

| makeresults count=1
| eval a=mvappend("1","7")
| eval a_0=mvindex(a,0,0)
| eval a_1=mvindex(a,1,1)

However, the length might be greater than 2, and I would like a generic solution for this. I know I can create an MV field with an index, use mvexpand, and then stats to get everything back into a single event, but I run into memory issues with this on my own data. In short: a generic solution that does not use mvexpand.
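A hedged sketch of one mvexpand-free pattern, assuming Splunk 9.0 or later where foreach has a multivalue mode; <<ITER>> is the iteration counter and <<ITEM>> the current value, and the exact token behaviour is worth verifying against your version.

| makeresults count=1
| eval a=mvappend("1","7","9")
| foreach mode=multivalue a
    [ eval a_<<ITER>> = "<<ITEM>>" ]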
Hi. We noticed that a few of our RHEL8 servers with splunkforwarder installed log the line below up to thousands of times, causing splunkd.log files to grow excessively and fill the /opt directory. Sometimes it occurs every few seconds, while at other times it is logged hundreds of times per second. So far only a handful of servers are experiencing the problem, and we have many others running the same version and OS.

09-17-2023 20:33:50.029 +0000 ERROR BTreeCP [2386469 TcpOutEloop] - failed: failed to mkdir /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp: File exists

Restarting the splunkforwarder service mitigates the problem temporarily, but the error occurs again within a few days. When the error messages come in, the directory already exists and contains files:

# ls /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp/
btree_index.dat btree_records.dat

We are not sure what causes the issue or how to reproduce it.
Hello! I have a Splunk Enterprise 9.0.7 deployment and a local user with the "power" role. When connecting to the Search & Reporting app, that user can only see the Search option (screenshot not shown). Shouldn't the "power" role be able to access other app features? The expectation is to see what users with the "admin" role see. What have I done wrong? Thank you and best regards, Andrew
Trying to set up an alert that shows any login that has had 500 logon failures within 30 minutes. Here is what I currently have (with non-relevant data changed):

index=* sourcetype=* action=failure EventCode=4771 OR EventCode=4776
| bucket _time span=30m
| stats count by user
| where count>500

I want to make sure this is correct. Thanks!
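A hedged note: as written, the stats counts failures per user over the whole search window rather than per 30-minute bucket, because _time is dropped from the by clause, and the OR should be parenthesised so action=failure applies to both EventCodes. A minimal sketch of the adjusted version, keeping the index/sourcetype wildcards from the post:

index=* sourcetype=* action=failure (EventCode=4771 OR EventCode=4776)
| bucket _time span=30m
| stats count by _time user
| where count>500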
Hi. I use the metrics.log a lot on the indexer side to debug bottlenecks and/or stress inside the infrastructure. There is one field I can't really understand at all:

INFO Metrics - group=tcpin_connections x.x.x.x:50496:9997 connectionType=cookedSSL sourcePort=50496 sourceHost=x.x.x.x sourceIp=x.x.x.x destPort=9997 kb=15.458984375 _tcp_avg_thruput=7.262044477222557 _tcp_Kprocessed=589.84765625 [...]

It's the "_tcp_Kprocessed" field, especially in relation to the "kb" field, which in my opinion is the most important one. What does "_tcp_Kprocessed" mean in practice, considering that its values are often very inconsistent and not proportional to kb? Thanks.
Hi, can anybody help with this task?

Inputs:
"nice_date",sFaultInverter1,sFaultInverter2,sFaultInverter3,sFaultPFC,"sFaultSR-Plaus",sFaultSR,sFaultSpeed
"05.12.2023 10:46:53",0,0,1,0,"-1",0,0
"05.12.2023 10:43:27","-1","-1","-1","-1","-1","-1","-1"
"05.12.2023 10:41:17",0,320,0,0,"-1",0,0
"05.12.2023 10:30:32",0,0,1,0,"-1",0,0
"05.12.2023 10:28:51",0,0,1,0,"-1",0,0
"05.12.2023 10:28:10","-1","-1","-1","-1","-1","-1","-1"

Lookup:
Attribut,Value,ErrorCode
sFaultInverter1,-1,NoCommunication
sFaultInverter1,0,noError
sFaultInverter1,1,CompressorCurrentSensorFault
sFaultInverter1,2,FactorySettings
sFaultInverter1,4,
sFaultInverter1,8,
sFaultInverter1,16,InverterBridgeTemperatureSensorFault
sFaultInverter1,32,DLTSensorFault
sFaultInverter1,64,ICLFailure
sFaultInverter1,128,EEPROMFault
sFaultInverter1,256,UpdateProcess
sFaultInverter1,512,
sFaultInverter1,1024,
sFaultInverter1,2048,
sFaultInverter1,4096,
sFaultInverter1,8129,
sFaultInverter1,16384,
sFaultInverter1,32768,
sFaultInverter2,-1,NoCommunication
sFaultInverter2,0,noError
sFaultInverter2,1,CommunicationLos
sFaultInverter2,2,DcLinkRipple
sFaultInverter2,4,
sFaultInverter2,8,AcGridOverVtg
sFaultInverter2,16,AcGridUnderVtg
sFaultInverter2,32,DcLinkOverVtgSW
sFaultInverter2,64,DcLinkUnderVtg
sFaultInverter2,128,SpeedFault
sFaultInverter2,256,AcGridPhaseLostFault
sFaultInverter2,512,InverterBridgeOverTemperature
sFaultInverter2,1024,
sFaultInverter2,2048,

I would like a table with, e.g., 3 columns:
"nice_date",sFaultInverter1,ErrorCode
"05.12.2023 10:46:53",0,noError
"05.12.2023 10:43:27","-1",NoCommunication
"05.12.2023 10:41:17",0,noError
"05.12.2023 10:30:32",0,noError
"05.12.2023 10:28:51",0,noError
"05.12.2023 10:28:10","-1",NoCommunication

i.e. for each value of sFaultInverter1, the corresponding ErrorCode from the lookup table. Any help?
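A hedged sketch of one approach, assuming the lookup table is saved as a lookup file (fault_codes.csv is a made-up name here) with the columns Attribut, Value, ErrorCode shown above: set the attribute name as a constant and match on the (Attribut, Value) pair.

... your base search over the input data ...
| eval Attribut="sFaultInverter1"
| lookup fault_codes.csv Attribut Value AS sFaultInverter1 OUTPUT ErrorCode
| table nice_date sFaultInverter1 ErrorCode

The same pattern should repeat for the other sFault* columns by changing the constant and the AS field.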
Hi, I have Windows Events for a specific application that carry a payload in the Windows Event Log; when using Splunk_TA_windows to extract the data, I get a field with multiple "Data" elements:

<Data>process_name</Data><Data>signature_name</Data><Data>binary_description</Data>

How can I extract these automatically into fields/values:

process_name = process_name
signature = signature_name
binary = binary_description

Is there a way to do this without a "big" regex? Just capture $1:$2:$3 and then assign names to $1, $2, $3, like for CSV. Something like:

REGEX = (?ms)<Data>(.*?)<\/Data>

This would create maybe one multivalue field, and I would then want to assign field names to the values.
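A hedged sketch of one transforms.conf approach, assuming there are exactly three <Data> elements in that fixed order; the stanza and sourcetype names below are made up and would need adjusting to the actual TA setup.

# transforms.conf
[extract_eventdata_fields]
REGEX = <Data>([^<]*)</Data><Data>([^<]*)</Data><Data>([^<]*)</Data>
FORMAT = process_name::$1 signature::$2 binary::$3

# props.conf
[your:windows:sourcetype]
REPORT-eventdata_fields = extract_eventdata_fields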