All Posts

Hello jconger, Thanks for the comment. My system admin did the registration and added the permissions during the original setup.  We're not getting any message trace data. I can ask him to double-check the roles, but as far as I know, it was done properly. 
This source does not seem to match the visualisation you have shown earlier. Are you using a trellis of singles or not?
Hi @Nitesh.Kewat, You are a partner and should be on a paid account, not a trial license. Can you confirm if this is true or not?
Following this thread as well, as I have observed the same issue following an upgrade from 9.1.2 to 9.2.1.
Hi, Sending reports in the email body through the AppDynamics API is indeed possible, but it requires some specific programming logic. I recently received a similar request from one of our customers, asking to send reports directly in the email body.

To fulfill this requirement, I developed a Python script. The script uses the AppDynamics performance-metrics API to fetch the relevant data, structures it into a table, and sends the email using SMTP details configured within the script. To automate the process, I set up a cron job to execute the script every 30 minutes.

The customer specifically requested server performance metrics in the email body. Please refer to the attached image illustrating how the server performance metrics are displayed, complete with warning and critical indicators for when server performance crosses predefined thresholds.

Google Drive link detailing the coding logic: https://drive.google.com/file/d/1WpPnKtI38VlC7vF3aBHPUXpH3_YJhoK1/view?usp=sharing

Thanks, Harshank Patil
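For anyone who wants to reproduce this approach, here is a minimal sketch of the logic described above. The controller URL, application name, metric path, credentials, SMTP relay, and thresholds are all placeholders rather than values from my actual script; the request targets the public AppDynamics metric-data REST endpoint, so adapt everything to your environment.

import smtplib
import requests
from email.mime.text import MIMEText

# Placeholders: adjust for your controller, application, and mail relay
CONTROLLER = "https://mycontroller.saas.appdynamics.com"
APP = "MyApplication"
METRIC_PATH = "Application Infrastructure Performance|*|Hardware Resources|CPU|%Busy"
AUTH = ("apiuser@customer1", "secret")  # "user@account", password
SMTP_HOST = "smtp.example.com"
MAIL_FROM, MAIL_TO = "appd-reports@example.com", "team@example.com"

# Fetch the last 30 minutes of metric data as JSON
resp = requests.get(
    f"{CONTROLLER}/controller/rest/applications/{APP}/metric-data",
    params={"metric-path": METRIC_PATH, "time-range-type": "BEFORE_NOW",
            "duration-in-mins": 30, "rollup": "true", "output": "JSON"},
    auth=AUTH)
resp.raise_for_status()

# Build an HTML table, flagging rows that cross example thresholds
rows = []
for metric in resp.json():
    value = metric["metricValues"][0]["value"] if metric["metricValues"] else 0
    status = "CRITICAL" if value > 90 else ("WARNING" if value > 75 else "OK")
    rows.append(f"<tr><td>{metric['metricPath']}</td><td>{value}</td><td>{status}</td></tr>")
html = ("<table border='1'><tr><th>Metric</th><th>Value</th><th>Status</th></tr>"
        + "".join(rows) + "</table>")

# Send the table in the email body rather than as an attachment
msg = MIMEText(html, "html")
msg["Subject"], msg["From"], msg["To"] = "Server performance report", MAIL_FROM, MAIL_TO
with smtplib.SMTP(SMTP_HOST) as smtp:
    smtp.send_message(msg)

A crontab entry such as */30 * * * * /usr/bin/python3 /opt/scripts/appd_report.py then runs it every 30 minutes, as described above.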
I am not sure what you are asking of me here - your original issue seems to have been solved by @gcusello.
Please explain what is meant by "it's not working".  That phrase does not provide any actionable information.  What are the current results and how do they differ from what you expect? Does the "other_transforms_stanza" do anything to the data that might affect the "my-log" stanza? Have you used regex101.com to test the REGEX? The "^.*" construct at the beginning of the regex is meaningless.  Get rid of it.
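To illustrate (the pattern below is invented, since the full REGEX wasn't shared): a leading ^.* only forces the regex engine to consume the line and backtrack; it does not change which events match.

# Before: the ^.* prefix adds nothing but backtracking
REGEX = ^.*ERROR\s+(?<code>\d+)
# After: matches the same events
REGEX = ERROR\s+(?<code>\d+)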
Here is the source code for destination dashboard { "dataSources": { "ds_PxUp5pVa": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" scenario=\"$scenariosTok$\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| rex field=stepName \"\\:(?P<action>.*)\"\r\n| search Page=\"$stepTok$\"\r\n| eval Page=upper(Page)\r\n| stats list(action) as Actions by Page,scenario,error_log\r\n| rename Page as \"Page(Step)\",scenario as Scenarios,error_log as \"Exceptions\"\r\n| table Page(Step),Scenarios,Actions,Exceptions", "queryParameters": {} }, "name": "APP-xxx_THAAProperxxx_regressionactions" }, "ds_GTBvsceW": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| search scenario=\"$scenariosTok$\" \r\n| stats count by Page" }, "name": "steps" }, "ds_0peRb3iY": { "type": "ds.search", "options": { "query": "index=\"xxx\" appID=\"APP-xxx\" environment=xxx tags=\"*Parm*\" OR \"*Batch*\" stepName=\"*\" scenario=\"*\" status=FAILED\r\n| rex field=stepName \"^(?<Page>[^\\:]+)\"\r\n| rex field=stepName \"^\\'(?<Page>[^\\'\\:]+)\"\r\n| search Page=\"$stepTok$\"\r\n| stats count by scenario", "queryParameters": {} }, "name": "Scenarios" } }, "visualizations": { "viz_Qido8tOl": { "type": "splunk.table", "options": { "count": 100, "dataOverlayMode": "none", "drilldown": "none", "showInternalFields": false, "backgroundColor": "#FAF9F6", "tableFormat": { "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)", "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)", "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)", "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)" } }, "dataSources": { "primary": "ds_PxUp5pVa" }, "title": "Regression Results with Actions & Error Details" }, "viz_wtfZ8Urm": { "type": "splunk.markdown", "options": { "markdown": "***xxx***", "fontColor": "#FAF9F6", "fontSize": "custom", "customFontSize": 34, "fontFamily": "Arial" } }, "viz_wkircOE3": { "type": "splunk.rectangle", "options": { "fillColor": "#FAF9F6", "strokeColor": "#000000" } }, "viz_KR6lYV6G": { "type": "splunk.rectangle", "options": { "fillColor": "#FAF9F6" } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-7d@h,now" }, "title": "Global Time Range" }, "input_7pdCBCBD": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "defaultValue": "*", "token": "scenariosTok" }, "title": "Scenarios", "type": "input.dropdown", "dataSources": { "primary": "ds_0peRb3iY" }, "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [ [ "All" ], [ "*" ] ], "label": ">primary | seriesByName(\"scenario\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"scenario\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_mv3itLP9": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "defaultValue": "*", "token": "stepTok" }, "title": "Page(Steps)", "type": "input.dropdown", 
"dataSources": { "primary": "ds_GTBvsceW" }, "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [ [ "All" ], [ "*" ] ], "label": ">primary | seriesByName(\"Page\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"Page\") | renameSeries(\"value\") | formatByType(formattedConfig)" } } }, "layout": { "type": "absolute", "options": { "width": 2000, "height": 2500, "display": "auto", "backgroundColor": "#294e70" }, "structure": [ { "item": "viz_wkircOE3", "type": "block", "position": { "x": 10, "y": 50, "w": 1980, "h": 1810 } }, { "item": "viz_wtfZ8Urm", "type": "block", "position": { "x": 10, "y": 10, "w": 1580, "h": 50 } }, { "item": "viz_KR6lYV6G", "type": "block", "position": { "x": 20, "y": 60, "w": 1950, "h": 510 } }, { "item": "viz_Qido8tOl", "type": "block", "position": { "x": 30, "y": 70, "w": 1930, "h": 490 } } ], "globalInputs": [ "input_global_trp", "input_7pdCBCBD", "input_mv3itLP9" ] }, "title": "xxx", "description": "", "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" }, "refreshType": "delay", "refresh": "5m" } } } } }
Hi all

After attempting to troubleshoot my issue alone, I will try my luck here.

Purpose: clone one sourcetype so the logs are stored both in a local indexer and in a distant one. I use one heavy forwarder to receive the logs and store them in an indexer, and the same heavy forwarder should clone the sourcetype and forward the clone to a distant heavy forwarder that I don't manage.

Here is my config:

[inputs.conf]
[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

This works pretty well, and all logs are stored into my indexer. Now comes the cloning part:

[props.conf]
[log_sourcetype]
TRANSFORMS-log_sourcetype-clone = log_sourcetype-clone

[transforms.conf]
[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF_formylogs

[outputs.conf] => for cloned logs
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false

This configuration is used for another use case, as sometimes I have had to anonymize some logs. However, for this particular use case, when I activate the cloning part, it stops the complete log flow, even on the local indexers. I don't quite understand why, because I don't see any difference from my other use case, apart from the fact that the logs arrive over UDP rather than TCP. Am I missing something? Thanks a lot for your help
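One thing worth checking here: a transform whose DEST_KEY is _TCP_ROUTING replaces the default routing for every event that matches REGEX; it does not duplicate the event. Since REGEX = . matches everything, all events may be going only to distant_HF_formylogs. A common pattern, assuming the local indexers are also reached through an outputs.conf target group (the name local_indexers below is illustrative), is to list both groups:

[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers, distant_HF_formylogs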
Here are the essential parts you should consider using

| eval match=if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "RED", null())
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)

Obviously, change the tableCellColourWithoutJS to be the id of your panel

<panel depends="$stayhidden$">
  <html>
    <style>
      #tableCellColourWithoutJS table tbody td div.multivalue-subcell[data-mv-index="1"]{
        display: none;
      }
    </style>
  </html>
</panel>

<format type="color">
  <colorPalette type="expression">case (match(value,"RED"), "#ff0000")</colorPalette>
</format>
The timeframe you showed (which I removed) applies to the search to dynamically populate the dropdown. Since you don't have a search to populate the dropdown (as shown by your error message), you don't need the timeframe here. You can still use the time in other parts of your dashboard.
As you mentioned, I tried mvappend on the fields and it's showing both values in the table. The thing is, I need to show the colours only when the values do not match.

| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged, " ", if(SourceFileDTLCount!=TotalAPGLRecordsCountStaged, "Not Match","RED"))
| eval SourceFileDTLCount=mvappend(SourceFileDTLCount,match)
I want to filter based on the timeframe, so I can't remove that.
This may be a very simple question, but I'm having trouble finding the answer. I've been trying to use RUM data to identify and list the slowest pages on a website using the Observability dashboard; unfortunately, I don't seem to be able to drill down to any specific page from the dashboard. From what research I've done, it seems I may have to manually add thousands of RUM URL groupings to drill down further, but I have a feeling that shouldn't be correct?
@gcusello Yeah, understood and did the same, thank you. @ITWhisperer any idea? Need help here.

So now I ingested the csv file, and from this I am getting:

index=foo host=nx7503 source=C:/*/mkd.csv

Fields: Subscription, Resource, Key Vault, Secret, Expiration Date, Months

CSV file:

Subscription  Resource  Key Vault  Secret  Expiration Date  Months
BoB-foo  Dicore-automat  Dicore-automat-keycore  Di core-tuubsp1sct  2022-07-28  -21
BoB-foo  Dicore-automat  Dicore-automat-keycore  Dicore-stor1scrt  2022-07-28  -21
BoB-foo  G01462-mgmt-foo  G86413-vaultcore  G86413-secret-foo  2022-09-01  -20

And from the lookup (foo.csv):

Application  environment  appOwner
Caliber  Dicore - TCG  foo@gmail.com
Keygroup  G01462 - QA  goo@gmail.com
Keygroup  G01462 - SIT  boo@gmail.com

When the "Expiration Date" is reached and the "Resource" matches the "environment", trigger the alert and send mail to the respective emails (appOwner). How do I get this?
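A sketch of the kind of search this could start from, assuming the join is environment-to-Resource (it is not obvious from the samples whether Resource should match Application or environment in foo.csv, so adjust the lookup arguments accordingly):

index=foo host=nx7503 source="C:/*/mkd.csv"
| lookup foo.csv environment AS Resource OUTPUT appOwner
| where isnotnull(appOwner)
| table Resource "Expiration Date" Months appOwner

Saved as an alert with the "Send email" action, the To field can use $result.appOwner$, and the trigger can be set to "For each result" so each owner only receives their own rows.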
Hi, I want to ingest the backup logs which are in CloudWatch to Splunk using the AWS add-on. But I do not see any metric present in the add-on to fetch these details. Under which metric will these backup logs be present? How can I get these logs into Splunk using the add-on? Thank You!
Hi Paul, Thank you for your response; I have checked the link that you've given. I have tried with that, but it is not working for me.

For example: in the events below, I want to onboard only the data containing "some message" and discard the rest. Could you please suggest a solution for this?

2023-01-31 10:39:58 message1
2023-01-31 10:40:01 message2
2023-01-31 10:40:08 message3
2023-01-31 10:40:08 message4
2023-01-31 10:40:00 some message
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 some message in between
2023-01-31 10:40:01 message5
2023-01-31 10:40:01 message5
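In case a fuller sketch helps: the documented way to keep only specific events is to send everything to the nullQueue first and then route the events you want back to the indexQueue. This runs at parsing time on an indexer or heavy forwarder (the sourcetype name below is a placeholder):

props.conf
[your_sourcetype]
TRANSFORMS-keeponly = setnull, setparsing

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = some message
DEST_KEY = queue
FORMAT = indexQueue

The order in the TRANSFORMS- list matters: setnull discards everything, and setparsing then wins for events containing "some message", so only those are indexed.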
Please share the source code of your dashboard
The docs at https://docs.splunk.com/Documentation/Splunk/latest/Data/Applytimezoneoffsetstotimestamps#How_Splunk_software_determines_time_zones specify how Splunk determines the time zone for an event.  Note that the UI setting is not included.
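For reference, one of the options in that list is an explicit TZ attribute in props.conf on the indexer or heavy forwarder; a minimal per-host example (host name and zone are illustrative):

[host::myhost]
TZ = Europe/Paris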