All Posts


Hi, I'm trying to instrument my .NET application for Splunk Observability Cloud. I'm using this package for that, and it's working: I can see traces coming in. However, in the Database Query Performance section I can only see the queries executed by Hangfire (which we use to manage background jobs in the application); other DB queries are not captured. We are using a PostgreSQL database hosted in Amazon RDS, which is compatible. The SQL Database MetricSets are also active. How can I make sure all the DB queries are captured?
How do I determine when to use index=botsv1?
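index=botsv1 is the index the Boss of the SOC (BOTS) v1 dataset is typically loaded into, so you use it when your search should run against that dataset rather than your production data. One quick way to see which indexes exist and actually hold events (a general sketch; eventcount works against any index you are allowed to read):

| eventcount summarize=false index=*
| dedup index
| table index count

If botsv1 shows up in that list, searches for the BOTS v1 exercises should start with index=botsv1.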
"Name or service not known" means that you've typed in some address that your SH cannot properly resolve. Either you've made some typo or you have problems with DNS.
Your issue may be to do with what happens if the user has not selected a value for either token. A dashboard would normally wait for the user to make a selection. Handling tokens is easier in Classic SimpleXML dashboards than with what is currently available in Studio. Is this an option for you?
Hello @Easwar.C, could you please confirm whether tools.jar is in the correct path, which should be your JAVA_HOME in the OS? I tested with JDK8 and Tomcat, and I could see object instance tracking in my controller normally. I have attached screenshots of my configuration and results for reference.

On a side note, you can open a case with AppDynamics Support; please look through this article if you need to raise a case: https://community.appdynamics.com/t5/Knowledge-Base/AppDynamics-is-migrating-our-Support-case-handling-system-to/ta-p/53966/redirect_from_archived_page/true

How do I open a case with AppDynamics Support? First, make sure that you have access to Cisco SCM by having a valid Cisco.com account. If you were part of the migration, this should have been done automatically for you. If you still need to request a Cisco.com account, please refer to the earlier communication about User Identity changes found here. Then make your way to the AppDynamics portal at appdynamics.com/support; when you log in, you will be automatically redirected to Cisco SCM.

Hope this helps. Best regards, Xiangning
Hi folks, I was working with a Splunk webhook, but I'm getting the error below when sending the payload through the webhook, even though the webhook URL has already been allowed:

action=webhook STDERR - Error sending webhook request: <urlopen error [Errno -2] Name or service not known>

Does anyone have any ideas on how to resolve this issue?
Hi @rammeduru, there is currently no app for these dashboards available on Splunkbase. You might try creating the dashboards yourself.
Hi @sgro777, did you try something like this?

eventtype=builder (user_id IN ($id$) OR user_mail IN ($email$))
| eval .....

Ciao. Giuseppe
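If either token can be left empty by the user, a null-safe variant is to quote the tokens and branch in a where clause. This is only a sketch - it assumes both form inputs define an empty-string default, and that the IDs are plain values without regex metacharacters:

eventtype=builder
| eval id_tok="$id$", email_tok="$email$"
| where (id_tok!="" AND match(user_id, "^(" . replace(id_tok, "\s*,\s*", "|") . ")$"))
    OR (id_tok="" AND email_tok!="" AND user_mail=email_tok)

Then continue with your eval. Quoting the tokens means an unset token becomes an empty string instead of breaking the IN (...) syntax.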
In addition to the mistaken path notation ({} for an array) that @PickleRick pointed out, you also do not need an extra spath if all you want is a multivalued field named commit_id. Splunk should have taken care of the extraction.

index=XXXXX source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook"
| rename commits{}.id as commit_id

This is a full emulation:

| makeresults format=json data="[{ \"ref\":\"refs/heads/Dev\", \"before\":\"d53e9b3cb6cde4253e05019295a840d394a7bcb0\", \"after\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"commits\":[ { \"id\":\"a5c816a817d06e592d2b70cd8a088d1519f2d720\", \"tree_id\":\"15e930e14d4c62aae47a3c02c47eb24c65d11807\", \"distinct\":false, \"message\":\"rrrrrrrrrrrrrrrrrrrrrr\", \"timestamp\":\"2024-08-12T12:00:04-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/aaaaaaaaaaaa\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[\"asdafasdad.json\"]}, { \"id\":\"a3b3b6f728ccc0eb9113e7db723fbfc4ad220882\", \"tree_id\":\"3586aeb0a33dc5e236cb266c948f83ff01320a9a\", \"distinct\":false, \"message\":\"xxxxxxxxxxxxxxxxxxx\", \"timestamp\":\"2024-08-12T12:05:40-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/a3b3b6f728ccc0eb9113e7db723fbfc4ad220...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[ \"sddddddf.json\"]}, { \"id\":\"bdcd242d6854365ddfeae6b4f86cf7bc1766e028\", \"tree_id\":\"8286c537f7dee57395f44875ddb8b2cdb7dd48b2\", \"distinct\":false, \"message\":\"Updating pipeline: pl_gwp_file_landing_check. Adding Sylvan Performance\", \"timestamp\":\"2024-08-12T12:06:10-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/bdcd242d6854365ddfeae6b4f86cf7bc1766e...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[ \"asadwefvdx.json\"]}, { \"id\":\"108ebd4ff8ae9dd70e669e2ca49e293684d5c37a\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":false, \"message\":\"asdrwerwq\", \"timestamp\":\"2024-08-12T10:09:33-07:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/108ebd4ff8ae9dd70e669e2ca49e293684d5c...\", \"author\":{ \"name\":\"dfsd\", \"email\":\"l.llllllllllll@aaaaaa.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"lllllllllllll\", \"email\":\"l.llllllllllll@abc.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[\"A.json\",\"A.json\",\"A.json\"]},{ \"id\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":true, \"message\":\"asadasd\", \"timestamp\":\"2024-08-12T13:32:45-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/34c07bcbf557413cf42b601c1794c87db8c32...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"GitasdjwqaikHubasdqw\", \"email\":\"noreply@gitskcaskadahuqwdqbqwdqaw.com\", \"username\":\"wdkcszjkcsebwdqwdfqwdawsldqodqw\"}, \"added\":[], \"removed\":[], \"modified\":[ \"a.json\", \"A1.json\", \"A1.json\"]}], \"head_commit\":{ \"id\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":true, \"message\":\"sadwad from xxxxxxxxxxxxxxx/IH-5942-Pipeline-Change\n\nIh 5asdsazdapeline change\", \"timestamp\":\"2024-08-12T13:32:45-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/3weweeeeeeeee\", \"author\":{ \"name\":\"askjas\", \"email\":\"101218171+asfsfgwsrsd@users.noreply.github.com\", \"username\":\"asdwasdcqwasfdc-qwgbhvcfawdqxaiwdaszxc\" }, \"committer\":{ \"name\":\"GsdzvcweditHuscwsab\", \"email\":\"noreply@gitasdcwedhub.com\", \"username\":\"wefczeb-fwefvdszlow\"}, \"added\":[], \"removed\":[], \"modified\":[\"zzzzzzz.json\",\"Azzzzz.json\",\"zzzz.json\" ]}}]"
| spath
``` the above emulates index=XXXXX source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook" ```
| rename commits{}.id as commit_id
| table commit_id

The output is:

commit_id
a5c816a817d06e592d2b70cd8a088d1519f2d720
a3b3b6f728ccc0eb9113e7db723fbfc4ad220882
bdcd242d6854365ddfeae6b4f86cf7bc1766e028
108ebd4ff8ae9dd70e669e2ca49e293684d5c37a
34c07bcbf557413cf42b601c1794c87db8c321d1
Yes, the search is semantically equivalent. That was point one in my previous comment. If your index search already has a field named ip, there is no need to run the search command in a second pipe (command). You also do not need those source evaluations, because join doesn't care about them.

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk ip="10.1.0.0/16"
    | fields ip risk score contact ]
| join max=0 type=left ip
    [ search index=risk ip="10.2.0.0/16"
    | fields ip risk score contact ]
| join max=0 type=left ip
    [ search index=risk ip="10.3.0.0/16"
    | fields ip risk score contact ]
| table ip, host, risk, score, contact

I do not see any dedup in your mock code, but I assume that you have customization that is not shown.
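If the joins ever run into subsearch limits, the same result can usually be had without join at all, by appending the lookup to the index search and aggregating by ip. A sketch, assuming ip is the only common key and host comes from the CSV:

index=risk (ip="10.1.0.0/16" OR ip="10.2.0.0/16" OR ip="10.3.0.0/16")
| fields ip risk score contact
| inputlookup append=true host.csv
| eval ip=coalesce(ip, ip_address)
| stats values(host) as host values(risk) as risk values(score) as score values(contact) as contact by ip
| table ip, host, risk, score, contact

Note the semantics differ slightly from type=left: IPs that appear only in the index are kept too. Adding | where isnotnull(host) at the end would mimic the left join.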
I'm very new to Splunk. I have two tokens as input to a dashboard and want to change a query based on which one is entered.

My base query (with no dashboard):

eventtype=builder user_id IN (<value1>, <value2>, etc.) | eval .....

I created a dashboard and want to use tokens for the input: token1=$id$, token2=$email$.

If token1 has data, I want to execute:

eventtype=builder user_id IN ($id$) | eval ....

Otherwise, I want to execute:

eventtype=builder user_mail IN ($email$) | eval .....
Well, you can try to make a compound regex containing some alternative branches. Also, you seem to have some XML-like structure there; if it's valid XML, why not just parse the XML into fields and check for the existence of specific fields? I'm also not sure about the rest of the search, but honestly speaking, it's too late and I'm too tired at the moment to look into it.
Do you need to do this in SPL during search, or are you trying to define a field extraction? Anyway, the usual answer to "regex" and "json" in one sentence is "don't fiddle with regex on structured data". With SPL it's relatively easy - extract your fields either with KV_MODE=json or explicitly using spath, and then do

| rex field=Comment__c "with (?<failures_no>\d+) failures"

With a field extraction it might not be that easy, because transforms which you could call on a json-extracted field are called before autoextractions. So you might actually need to define an extraction based on the raw data with that regex, but that will be unintuitive to maintain, since your data seems to be well-formed json, and with json you'd expect explicitly named fields, not some funky stuff pulled from somewhere in the middle.
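To sanity-check that regex against one of the sample events, here is a self-contained emulation (a sketch, assuming the field comes out of spath as Comment__c):

| makeresults
| eval _raw="{\"Comment__c\": \"Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 7 batches with 7 failures.7\"}"
| spath
| rex field=Comment__c "with (?<failures_no>\d+) failures"
| table failures_no

This should return failures_no=7.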
Requirements:
- Find and save sensitive data fields from logs
- Save a log snippet around each sensitive data field
- Remove duplicates per mule app and sensitive data field
- Create a table showing mule app name, sensitive data, and log snippet

Is there a way to improve the search query so I don't have to duplicate the rex commands every time I need to add a new sensitive data value? (app_name is an existing custom field)

index="prod"
| rex field=_raw "(?i)(?<birthDate>(birthDate))"
| rex field=_raw "(?i)(?<dob>(dob))"
| rex field=_raw "(?i)(?<birthday>(birthday))"
| rex field=_raw "(?i)(?<birthDateLog>(birthDate).*?\w\W)"
| rex field=_raw "(?i)(?<dobLog>(dob).*?\w\W)"
| rex field=_raw "(?i)(?<birthdayLog>(birthday).*?\w\W)"
| eval SENSITIVE_DATA=mvappend(birthDate,dob,birthday)
| eval SENSITIVE_DATA_LOWER=lower(SENSITIVE_DATA)
| dedup app_name SENSITIVE_DATA_LOWER
| eval SENSITIVE_DATA_LOG=mvappend(birthDateLog,dobLog,birthdayLog)
| stats list(SENSITIVE_DATA_LOG) as SENSITIVE_DATA_LOG list(SENSITIVE_DATA_LOWER) as SENSITIVE_DATA_LOWER by app_name
| table app_name SENSITIVE_DATA_LOWER SENSITIVE_DATA_LOG

Example output:

app_name | SENSITIVE_DATA_LOWER | SENSITIVE_DATA_LOG
s-api | dob, birthdate | dob: 01/01/2024, birthdate: 09-09-1999
p-api | birthday | birthday: August 23, 2024
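Along the lines of the compound-regex suggestion above, the six rex calls can be collapsed into two by using alternation, so adding a new keyword means editing only the two patterns. A sketch, assuming the same three keywords, with max_match=0 so every hit in an event is kept as a multivalued result:

index="prod"
| rex max_match=0 field=_raw "(?i)(?<SENSITIVE_DATA>birthDate|dob|birthday)"
| rex max_match=0 field=_raw "(?i)(?<SENSITIVE_DATA_LOG>(?:birthDate|dob|birthday).*?\w\W)"
| eval SENSITIVE_DATA_LOWER=lower(SENSITIVE_DATA)
| dedup app_name SENSITIVE_DATA_LOWER
| stats list(SENSITIVE_DATA_LOG) as SENSITIVE_DATA_LOG list(SENSITIVE_DATA_LOWER) as SENSITIVE_DATA_LOWER by app_name
| table app_name SENSITIVE_DATA_LOWER SENSITIVE_DATA_LOG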
Assuming you wanted to say path=commits{}.id, it seems to work for me.

| makeresults
| eval _raw="{ \"ref\":\"refs/heads/Dev\", \"before\":\"d53e9b3cb6cde4253e05019295a840d394a7bcb0\", \"after\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"commits\":[ { \"id\":\"a5c816a817d06e592d2b70cd8a088d1519f2d720\", \"tree_id\":\"15e930e14d4c62aae47a3c02c47eb24c65d11807\", \"distinct\":false, \"message\":\"rrrrrrrrrrrrrrrrrrrrrr\", \"timestamp\":\"2024-08-12T12:00:04-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/aaaaaaaaaaaa\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[\"asdafasdad.json\"]}, { \"id\":\"a3b3b6f728ccc0eb9113e7db723fbfc4ad220882\", \"tree_id\":\"3586aeb0a33dc5e236cb266c948f83ff01320a9a\", \"distinct\":false, \"message\":\"xxxxxxxxxxxxxxxxxxx\", \"timestamp\":\"2024-08-12T12:05:40-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/a3b3b6f728ccc0eb9113e7db723fbfc4ad220...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[ \"sddddddf.json\"]}, { \"id\":\"bdcd242d6854365ddfeae6b4f86cf7bc1766e028\", \"tree_id\":\"8286c537f7dee57395f44875ddb8b2cdb7dd48b2\", \"distinct\":false, \"message\":\"Updating pipeline: pl_gwp_file_landing_check. Adding Sylvan Performance\", \"timestamp\":\"2024-08-12T12:06:10-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/bdcd242d6854365ddfeae6b4f86cf7bc1766e...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[ \"asadwefvdx.json\"]}, { \"id\":\"108ebd4ff8ae9dd70e669e2ca49e293684d5c37a\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":false, \"message\":\"asdrwerwq\", \"timestamp\":\"2024-08-12T10:09:33-07:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/108ebd4ff8ae9dd70e669e2ca49e293684d5c...\", \"author\":{ \"name\":\"dfsd\", \"email\":\"l.llllllllllll@aaaaaa.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"lllllllllllll\", \"email\":\"l.llllllllllll@abc.com\", \"username\":\"aaaaaa\"}, \"added\":[], \"removed\":[], \"modified\":[\"A.json\",\"A.json\",\"A.json\"]},{ \"id\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":true, \"message\":\"asadasd\", \"timestamp\":\"2024-08-12T13:32:45-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/34c07bcbf557413cf42b601c1794c87db8c32...\", \"author\":{ \"name\":\"aaaaaa aaaaaa\", \"email\":\"101218171+aaaaaa@users.noreply.github.com\", \"username\":\"aaaaaa\"}, \"committer\":{ \"name\":\"GitasdjwqaikHubasdqw\", \"email\":\"noreply@gitskcaskadahuqwdqbqwdqaw.com\", \"username\":\"wdkcszjkcsebwdqwdfqwdawsldqodqw\"}, \"added\":[], \"removed\":[], \"modified\":[ \"a.json\", \"A1.json\", \"A1.json\"]}], \"head_commit\":{ \"id\":\"34c07bcbf557413cf42b601c1794c87db8c321d1\", \"tree_id\":\"5a6d71393611718b8576f8a63cdd34ce619f17dd\", \"distinct\":true, \"message\":\"sadwad from xxxxxxxxxxxxxxx/IH-5942-Pipeline-Change\n\nIh 5asdsazdapeline change\", \"timestamp\":\"2024-08-12T13:32:45-05:00\", \"url\":\"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/3weweeeeeeeee\", \"author\":{ \"name\":\"askjas\", \"email\":\"101218171+asfsfgwsrsd@users.noreply.github.com\", \"username\":\"asdwasdcqwasfdc-qwgbhvcfawdqxaiwdaszxc\" }, \"committer\":{ \"name\":\"GsdzvcweditHuscwsab\", \"email\":\"noreply@gitasdcwedhub.com\", \"username\":\"wefczeb-fwefvdszlow\"}, \"added\":[], \"removed\":[], \"modified\":[\"zzzzzzz.json\",\"Azzzzz.json\",\"zzzz.json\" ]}}"
| spath output=commit_id path=commits{}.id
| table commit_id

This shows 5 values. (Splunk 9.3.0)
Hi @yuanliu, thank you again for your analysis and suggestion. The environment where I am working is very restrictive about making changes to limits.conf due to a resource problem. The index in the real data set I am working on has more than 1 million rows in a 1-day time frame; I got it down to 150k after filtering with specific subnets, fields, and dedups. If this is the case, is splitting the subsearch for the join the only way to do it? Does the following search for splitting look correct? Please let me know if you have a better idea or workaround. The real data has a lot more than 3 IP subnets. Thank you again.

| inputlookup host.csv
| rename ip_address as ip
| eval source="csv"
| join max=0 type=left ip
    [ search index=risk
    | fields ip risk score contact
    | search ip="10.1.0.0/16"
    | eval source="risk1" ]
| join max=0 type=left ip
    [ search index=risk
    | fields ip risk score contact
    | search ip="10.2.0.0/16"
    | eval source="risk2" ]
| join max=0 type=left ip
    [ search index=risk
    | fields ip risk score contact
    | search ip="10.3.0.0/16"
    | eval source="risk3" ]
| table ip, host, risk, score, contact
We have JSON logs. From the logs below, we need a rex to extract the failure count that is mentioned in the log text (e.g. "7 failures").

{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 3 batches with 3 failures.3", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 4 batches with 4 failures.4", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 5 batches with 5 failures.5", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 7 batches with 7 failures.7", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 10 batches with 10 failures.10", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
source=lastlog corresponds to the source setting in inputs.conf for the lastlog.sh script, so this one checks out. It doesn't have anything to do with the lastlog file. Check the output of splunk btool props list lastlog --debug | grep DATETIME_CONFIG. If it shows anything other than CURRENT as the value, you're overwriting this setting in the file shown.
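If you want to see how far off the parsed timestamps actually are, comparing _time with _indextime is a quick check. A sketch - adjust index=* to wherever your lastlog events land:

index=* source=lastlog
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) max(lag_seconds) avg(lag_seconds) by host

With DATETIME_CONFIG = CURRENT the lag should be near zero; large values suggest timestamps are being parsed out of the event text instead.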
Also be sure that you're not using any TLS-inspection solution if it's on-prem.
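For the webhook error above, the full context usually shows up in the internal logs as well. A sketch - the component name can vary between versions, so treat this as a starting point rather than an exact search:

index=_internal sourcetype=splunkd component=sendmodalert action="webhook"
| table _time log_level _raw

Seeing which hostname splunkd tried to resolve (and from which host) can confirm whether it is a typo, a missing DNS entry, or a proxy/TLS-inspection issue as mentioned above.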
Hi, it's possible that the database you're using isn't supported for Database Query Performance. I suggest checking the supported list here: https://docs.splunk.com/observability/en/apm/db-query-perf/db-perf-reference.html#supported-dbs Also, check your APM MetricSets under Settings -> APM MetricSets and make sure that Database Query Performance is enabled and active.