All Posts

Assuming your regex correctly extracts the fields you want, could you try this:

# transforms.conf
[my-log]
REGEX = FICHERO_LOG.*\=\s*(?<log>.*?)\s*\n
MV_ADD = true

# props.conf
[extractingFields]
TRANSFORMS-ArbitraryName1 = my-log
TRANSFORMS-ArbitraryName2 = other_transforms_stanza

Note that the multivalue-add setting is spelled MV_ADD (with an underscore), not MV-ADD.
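If you want to sanity-check the capture group before touching props.conf/transforms.conf, a quick rex test along these lines may help. The FICHERO_LOG sample line below is made up, and the trailing \n is replaced with $ because the makeresults sample has no newline:

| makeresults
| eval _raw="FICHERO_LOG = /var/log/app/fichero.log"   ``` hypothetical sample event ```
| rex field=_raw "FICHERO_LOG.*\=\s*(?<log>.*?)\s*$"
| table log

If the log field comes out as expected here, the same pattern (with \n restored) should behave the same way in the transform.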
Still no dice on that.  It only happens to these few logs that are formatted this way.  Could there be anything else preventing it from breaking apart properly?    
Hi @tscroggins, I really appreciate your comments; I'm currently working with the changes you've suggested. Thanks and regards,
Hello. How do I determine what the original source name may be?
We have several summary searches that collect data into metric indexes. They run nightly and some of them create quite a large number of events (~100k). As a result we sometimes see warnings that the metric indexes cannot be optimised fast enough. A typical query looks like

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

The main contributor to the large number of events is the cardinality of deviceId (~100k), which effectively is a "MAC" address with a common prefix and defined length. I could create 4/8/16 reports, each selecting a subset of deviceIds, and schedule them at different times, but it would be quite a burden to maintain those basically identical copies. So... I wonder if there is a mechanism to shard the search results and feed them into many separate mcollects that are spaced apart by some delay. Something like

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| shard by deviceId bins=10 sleep=60s
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Maybe my pseudo code above is not so clear. What I would like to achieve is that, instead of one huge mcollect, I get 10 mcollects (each for approximately 1/10th of the events), scheduled approximately 60s apart from each other...
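There is no built-in shard command, but as a sketch of one possible way to split the work (the bucket count of 10 and the md5-based bucketing below are assumptions, and you would still need one scheduled copy per bucket, offset by your 60s):

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| eval shard=tonumber(substr(md5(deviceId), 31, 2), 16) % 10   ``` stable bucket 0-9 from the last two hex digits of md5(deviceId) ```
| where shard==0   ``` copy N keeps shard==N ```
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Each copy would differ only in the shard==N filter and its cron offset, which keeps the N reports otherwise identical.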
What @PickleRick points out is that the event snippets in your illustration do not contain the necessary fields used in your search. (Side lesson #1: Screenshots do not help anything except in explaining expected and actual visualization.) Let me demonstrate with the following.

First of all, none of your illustrations explains where the JSON path content.payload{} comes from. You subsequently put this extracted field in mvexpand. Splunk will give you an error about the nonexistent field content.payload{}. Until you can demonstrate that this JSON path exists somewhere in your data, your illustrated full search cannot succeed. (Side lesson #2: Complicated SPL does not help diagnosis. Not only does it discourage others from reading and understanding your message, it also blurs your own thought process. Distill the search to the point where you can clearly illustrate a "yes"-"no" choice.)

Secondly, your illustrations do not produce any value for JobType, which, according to your search, comes from

| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand", like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand", like('message',"Expense Extract Process started%"),"Scheduled")

In other words, none of your illustrated JSON matches any of the three conditions, therefore | where JobType!=" " will give you no result.

To illustrate the above two points, let's comment out the problematic portions of the SPL and see what comes out from your data snippets:

| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
```| spath content.payload{} | mvexpand content.payload{} ```
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| rex field=message max_match=0 "Expense Extract Process started for (?<FileName>[^\n]+)"
| rex field=message max_match=0 "API: START: /v1/expense/extract/ondemand/(?<OtherRegion>[^\/]+)\/(?<OnDemandFileName>\S+)"
| eval OtherRegion=upper(OtherRegion)
| eval OnDemandFileName=rtrim(OnDemandFileName,"Job")
| eval "FileName/JobName"= coalesce(OnDemandFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled")
| eval Status=case(like('message' ,"%Concur AP/GL File/s Process Status%"),"SUCCESS", like('tracePoint',"%EXCEPTION%"),"ERROR")
| eval Region= coalesce(Region,OtherRegion)
| eval OracleRequestId=mvappend("RequestId:",RequestID,"ImpConReqid:",ImpConReqId)
| eval Response= coalesce(message,error,errorMessage)
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","NotMatch")
| rename Logon_Time as Timestamp
| table Status JobType Response ArchivedFileName ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs priority match
```| where JobType!=" " | search Status="*"```

This is what comes out (JobType, ArchivedFileName and Total Elapsed Time are empty in every row; multiple Response values are separated with ";"):

Status | Response | correlationId
SUCCESS | Before calling flow post-PInvoice-SubFlow; Concur AP/GL File/s Process Status; PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 755 Operating Unit: BZ_OU; PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 725 Operating Unit: AB_OU | 19554d60
 | After calling flow SubFlow; PRD(SUCCESS): Concur AP/GL Extract- Expense Report. Concur Batch ID: 450 Company Code: 725 Operating Unit: AB_OU; Post - Expense Extract processing to Oracle | 43b856a1
 | After calling flow post-APInvoice-SubFlow; Before calling flow post-APInvoice-SubFlow; Concur Process Status; ISG AP Response; PRD(SUCCESS): Concur AP/GL Extract - AP Expense Report. Concur Batch ID: 95; Post - Expense Extract processing to Oracle | 9a1219f2

As you can see, only one correlationId has a non-null Status, and none of them have any field other than Response. This is a common troubleshooting technique: reduce search complexity to reveal the parts that make a difference.

The following is an emulation of the data snippets you illustrated. Play with it and compare with your real data:

| makeresults
| eval data = mvappend("{ \"correlationId\" : \"43b856a1\", \"message\" : \"Post - Expense Extract processing to Oracle\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"43b856a1\", \"message\" : \"After calling flow SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"43b856a1\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract- Expense Report. Concur Batch ID: 450 Company Code: 725 Operating Unit: AB_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"19554d60\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 755 Operating Unit: BZ_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"19554d60\", \"message\" : \"Concur AP/GL File/s Process Status\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"19554d60\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 725 Operating Unit: AB_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"19554d60\", \"message\" : \"Before calling flow post-PInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Before calling flow post-APInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - AP Expense Report. Concur Batch ID: 95\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Post - Expense Extract processing to Oracle\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Concur Process Status\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"ISG AP Response\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
  "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"After calling flow post-APInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation for index="mulesoft" applicationName="s-concur-api" environment=PRD priority timestamp NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*") ```
Hello jconger, Thanks for the comment. My system admin did the registration and added the permissions during the original setup.  We're not getting any message trace data. I can ask him to double-check the roles, but as far as I know, it was done properly. 
This source does not seem to match the visualisation you have shown earlier. Are you using a trellis of singles or not?
Hi @Nitesh.Kewat, You are a partner and should be on a paid account, not a trial license. Can you confirm if this is true or not?
Following this thread as well, as I have observed the same issue following an upgrade from 9.1.2 to 9.2.1
Hi, Sending reports in the email body through the AppDynamics API is indeed possible, but it requires some specific programming logic. I recently received a similar request from one of our customers, asking to send reports directly in the email body. To fulfill this requirement, I developed a Python script. The script uses the AppDynamics performance metrics API to fetch the relevant data, structures it into a table format, and sends the email using SMTP details configured within the script. To automate the process, I set up a cron job to execute the script every 30 minutes. The customer specifically requested that server performance metrics be included in the email body. Please refer to the attached image illustrating how the server performance metrics are displayed in the email body, complete with warning and critical indicators for when server performance crosses predefined thresholds. Google Drive link detailing the coding logic: https://drive.google.com/file/d/1WpPnKtI38VlC7vF3aBHPUXpH3_YJhoK1/view?usp=sharing Thanks, Harshank Patil
I am not sure what you are asking of me here - your original issue seems to have been solved by @gcusello 
Please explain what is meant by "it's not working".  That phrase does not provide any actionable information.  What are the current results and how do they differ from what you expect? Does the "other_transforms_stanza" do anything to the data that might affect the "my-log" stanza? Have you used regex101.com to test the REGEX? The "^.*" construct at the beginning of the regex is meaningless.  Get rid of it.
Here is the source code for destination dashboard { "dataSources": { "ds_PxUp5pVa": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" scenario=\"$scenariosTok$\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| rex field=stepName \"\\:(?P<action>.*)\"\r\n| search Page=\"$stepTok$\"\r\n| eval Page=upper(Page)\r\n| stats list(action) as Actions by Page,scenario,error_log\r\n| rename Page as \"Page(Step)\",scenario as Scenarios,error_log as \"Exceptions\"\r\n| table Page(Step),Scenarios,Actions,Exceptions", "queryParameters": {} }, "name": "APP-xxx_THAAProperxxx_regressionactions" }, "ds_GTBvsceW": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| search scenario=\"$scenariosTok$\" \r\n| stats count by Page" }, "name": "steps" }, "ds_0peRb3iY": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" scenario=\"*\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| search Page=\"$stepTok$\"\r\n| stats count by scenario", "queryParameters": {} }, "name": "Scenarios" } }, "visualizations": { "viz_Qido8tOl": { "type": "splunk.table", "options": { "count": 100, "dataOverlayMode": "none", "drilldown": "none", "showInternalFields": false, "backgroundColor": "#FAF9F6", "tableFormat": { "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)", "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)", "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)", "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)" } }, "dataSources": { "primary": "ds_PxUp5pVa" }, "title": "Regression Results with Actions & Error Details" }, "viz_wtfZ8Urm": { "type": "splunk.markdown", "options": { "markdown": "***xxx***", "fontColor": "#FAF9F6", "fontSize": "custom", "customFontSize": 34, "fontFamily": "Arial" } }, "viz_wkircOE3": { "type": "splunk.rectangle", "options": { "fillColor": "#FAF9F6", "strokeColor": "#000000" } }, "viz_KR6lYV6G": { "type": "splunk.rectangle", "options": { "fillColor": "#FAF9F6" } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-7d@h,now" }, "title": "Global Time Range" }, "input_7pdCBCBD": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "defaultValue": "*", "token": "scenariosTok" }, "title": "Scenarios", "type": "input.dropdown", "dataSources": { "primary": "ds_0peRb3iY" }, "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [ [ "All" ], [ "*" ] ], "label": ">primary | seriesByName(\"scenario\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"scenario\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_mv3itLP9": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "defaultValue": "*", "token": "stepTok" }, "title": "Page(Steps)", "type": "input.dropdown", 
"dataSources": { "primary": "ds_GTBvsceW" }, "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [ [ "All" ], [ "*" ] ], "label": ">primary | seriesByName(\"Page\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"Page\") | renameSeries(\"value\") | formatByType(formattedConfig)" } } }, "layout": { "type": "absolute", "options": { "width": 2000, "height": 2500, "display": "auto", "backgroundColor": "#294e70" }, "structure": [ { "item": "viz_wkircOE3", "type": "block", "position": { "x": 10, "y": 50, "w": 1980, "h": 1810 } }, { "item": "viz_wtfZ8Urm", "type": "block", "position": { "x": 10, "y": 10, "w": 1580, "h": 50 } }, { "item": "viz_KR6lYV6G", "type": "block", "position": { "x": 20, "y": 60, "w": 1950, "h": 510 } }, { "item": "viz_Qido8tOl", "type": "block", "position": { "x": 30, "y": 70, "w": 1930, "h": 490 } } ], "globalInputs": [ "input_global_trp", "input_7pdCBCBD", "input_mv3itLP9" ] }, "title": "xxx", "description": "", "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" }, "refreshType": "delay", "refresh": "5m" } } } } }
Hi all

After trying to troubleshoot my issue on my own, I will try my luck here.

Purpose: clone one sourcetype so that the logs are stored both in a local indexer and in a distant one. I use one heavy forwarder to receive the logs and store them in an indexer, and the same heavy forwarder should clone the sourcetype and forward the cloned copy to a distant heavy forwarder that I don't manage.

Here is my config:

# inputs.conf
[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

This works pretty well, and all logs are stored in my indexer. Now comes the cloning part:

# props.conf
[log_sourcetype]
TRANSFORMS-log_sourcetype-clone = log_sourcetype-clone

# transforms.conf
[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF_formylogs

# outputs.conf => for cloned logs
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false

This configuration is already used for another use case, where I sometimes have to anonymize some logs. However, for this particular use case, when I activate the cloning part it stops the complete log flow, even to the local indexers. I don't quite understand why, because I don't see the difference with my other use case, apart from the fact that the logs arrive over UDP rather than TCP. Am I missing something?

Thanks a lot for your help
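One thing worth checking against the working use case: a transform that sets DEST_KEY = _TCP_ROUTING normally replaces the event's routing with exactly the groups listed in FORMAT, so if the local indexer is reached through a tcpout group, that group has to be listed as well or the events stop flowing there. A sketch only, assuming the local destination is a tcpout group; the name local_indexers and its port are made up for illustration:

# transforms.conf
[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
# list every destination the event should still reach, not only the new one
FORMAT = local_indexers,distant_HF_formylogs

# outputs.conf
[tcpout]
defaultGroup = local_indexers

[tcpout:local_indexers]
server = ip_of_local_indexer:9997

[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false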
Here are the essential parts you should consider using (note that if() needs an explicit third argument; null() leaves matching rows without the marker value):

| eval match=if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "RED", null())
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)

Obviously, change tableCellColourWithoutJS to be the id of your panel.

<panel depends="$stayhidden$">
  <html>
    <style>
      #tableCellColourWithoutJS table tbody td div.multivalue-subcell[data-mv-index="1"]{
        display: none;
      }
    </style>
  </html>
</panel>

<format type="color">
  <colorPalette type="expression">case(match(value,"RED"), "#ff0000")</colorPalette>
</format>
The timeframe you showed (which I removed) applies to the search to dynamically populate the dropdown. Since you don't have a search to populate the dropdown (as shown by your error message), you don't need the timeframe here. You can still use the time in other parts of your dashboard.
As you mentioned, I tried mvappend on the fields and it shows both values in the table. The thing is, I need the colours to show only when the values do not match.

| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged, " ", if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "Not Match","RED"))
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)
I want to filter based on the timeframe, so I can't remove that.