All Posts



@codebuilder I got the same message, but in Splunk I don't find any logs. What could be the problem?
Thank you, it worked.
I'm going to try that. I'm seeing now that the Windows App has a default transformation called "WinEventXmlHostOverride" that will override the host with the "Computer" XML value. Do you see any downside to doing this at search time rather than index time?
Or better, do not use regex, because the URI has an inherent structure/convention that many APIs adhere to. (See, e.g., Re: How do I modify my rex command to remove direc...)  What you ask for is the second-to-last segment of the HTTP_PATH variable in the CGI standard:

| eval actionID = mvindex(split(httpURL, "/"), -2)

Semantic code is easier to maintain and, in this case, potentially cheaper than regex.
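To illustrate the same split-and-index idea outside SPL, here is a minimal Python sketch. The sample URL and the `action_id` name are made up for illustration; the logic mirrors `mvindex(split(httpURL, "/"), -2)`.

```python
def action_id(http_url: str) -> str:
    """Return the second-to-last path segment, like mvindex(split(url, "/"), -2)."""
    # Strip a trailing slash so "/a/b/" and "/a/b" behave the same,
    # then split on "/" and take the second-to-last piece.
    segments = http_url.rstrip("/").split("/")
    return segments[-2]

print(action_id("/api/v2/actions/12345/status"))  # -> 12345
```

No backtracking, no character classes to get wrong: the structure of the path does the work.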
Hi Splunk Community, I need to create an alert that only gets triggered if two conditions are met. As a matter of fact, the conditions are layered:

1. Search results are >3 in a 5-minute interval.
2. Condition 1 is true 3 times over a 15-minute interval.

I thought I would create 3 sub-searches within the search, output the results in a "counter", and then run a search to identify if the "counter" values are >3:

index=foo mal_code="foo" source="foo.log"
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-5m@m latest=now
| stats count as event_count1
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-10m@m latest=-5m@m
| stats count as event_count2
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-15m@m latest=-10m@m
| stats count as event_count3
| search event_count*>0
| stats count as result

I am not sure my time modifiers are working correctly, but I am not getting the results I expected. I would appreciate some advice on how to go about this.
I am trying to create a props.conf to pass a custom timestamp. To do so, I wanted to upload data, use the "Set Source Type" page to configure the timestamp parameters, and then copy the props.conf to clipboard. However, the preview on this page does not update when I click "Apply Settings". The preview only changes when I select a new source type from the dropdown next to "Save As"; changing anything in "Event Breaks", "Timestamp", or "Advanced" does nothing. Something to note: there is a red exclamation point in the top left saying "Can only preview uploaded files", and I'm unsure what this means. When I do save the data and search it, it DOES look like the source type changes I made took effect, but this really isn't a feasible way to test and configure my parameters. Is there any way to get this visible in the "Add Data" preview?
Need assistance with this. I have installed the app, pointed it to the address of the on-prem server housing BlueCat, and ensured that the account we created can log in and has API access. I have pointed it there and given it credentials, but nothing is being pulled from BlueCat into Splunk. A little assistance would be appreciated; I read the README files and they didn't help much.    Thanks, Justin
Can you clarify your definition of "vulnerabilities in Splunk"?  If it is a known vulnerability that affects Splunk Enterprise, for example, Splunk will issue an update.  Your "fix" is to install that update. (This happened several times in the past half year.  It also happened with the 9.0 release.)  If the known vulnerability affects Splunk Cloud, the "fix" is to wait for Splunk to update the cloud. If you are talking about vulnerabilities in your own applications identified by a specific Splunk product such as Splunk Security, each vulnerability will have its own remediation method.  There is no way to generalize. (Although products like Splunk Security may give you specific hints, recommendations, even procedures.)
Altering the host field is one of the least desirable alterations, but it has to be done from time to time.  In your case, you probably have to use a calculated field.
Hello, I'm attempting to change the sourcetype and host on a single event. The tricky part is that I want the second transform based on the change from the first transform. For example, my data comes in as

index=main host=heavy_forwarder sourcetype=aws:logbucket

I want the data to change to

index=main host=amazonfsx.host sourcetype=XmlWinEventLog

The catch is that I have other sourcetypes coming in as aws:logbucket and getting transformed to various other sourcetypes (cloudtrail, config, etc). On these events I do not want to run the regex to change the host value.

If I have a props.conf file that states

TRANSFORMS-modify_data = aws_fsx_sourcetype, aws_fsx_host

and a transforms.conf of

[aws_fsx_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = ^source::s3:\/\/fsxbucket\/.*
FORMAT = sourcetype::XmlWinEventLog
DEST_KEY = MetaData:Sourcetype

[aws_fsx_host]
REGEX = <Computer>([^.<]+).*?<\/Computer>
FORMAT = host::$1
DEST_KEY = MetaData:Host

I'm worried this will have unexpected results on the other sourcetypes that aws:logbucket has, like cloudtrail and config. If I break it out with two separate transforms, like this

TRANSFORMS-modify_data = aws_fsx_sourcetype
TRANSFORMS-modify_data2 = aws_fsx_host

I'm worried the typing pipeline won't see the second transform. What is the most effective way to accomplish this?

Thanks, Nate
We are having difficulty getting exclusions of logs that have fields in Camelcase or have entries that have special characters related to OTEL logs. Fields without capitalization and/or special character values are able to be parsed out, but not others. Here is an example log that we are looking at (attached as yaml and key portion).         filelog/kube-apiserver-audit-log: include: - /var/log/kubernetes/kube-apiserver.log include_file_name: false include_file_path: true operators: - id: extract-audit-group type: regex_parser regex: '\s*\"resourceGroup\"\s*\:\s*\"(?P<extracted_group>[^\"]+)\"\s*' - id: filter-group type: filter expr: 'attributes.extracted_beta == "batch"' - id: remove-extracted-group type: remove field: attributes.extracted_group - id: extract-audit-api type: regex_parser regex: '\"level\"\:\"(?P<extracted_audit_beta>[^\"]+)\"' - id: filter-api type: filter expr: 'attributes.extracted_audit_beta == "Metadata"' - id: remove-extracted-api type: remove field: attributes.extracted_api - id: extract-audit-verb type: regex_parser regex: '\"verb\"\:\"(?P<extracted_verb>[^\"]+)\"' - id: filter-verb type: filter expr: 'attributes.extracted_verb == "watch" || attributes.extracted_verb == "list"' - id: remove-extracted-verb type: remove field: attributes.extracted_verb The resourceGroup field is compared to something else and failing, verb and level are succeeding. Here is an example log that would be pulled in. {"apiVersion":"batch/v1","component":"sync-agent","eventType":"MODIFIED","kind":"CronJob","level":"info","msg":"sent event","name":"agentupdater-workload","namespace":"vmware-system-tmc","resourceGroup":"batch","resourceType":"cronjobs","resourceVersion":"v1","time":"2024-03-14T18:17:11Z"}
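As a sanity check, the extraction regexes from the config above can be run against the sample event in Python (whose `re` syntax is close enough here to the `regex_parser` patterns). This is a hypothetical check, not OTel itself:

```python
import re

# The sample audit event from the post, as a single JSON string.
sample = ('{"apiVersion":"batch/v1","component":"sync-agent",'
          '"eventType":"MODIFIED","kind":"CronJob","level":"info",'
          '"msg":"sent event","name":"agentupdater-workload",'
          '"namespace":"vmware-system-tmc","resourceGroup":"batch",'
          '"resourceType":"cronjobs","resourceVersion":"v1",'
          '"time":"2024-03-14T18:17:11Z"}')

# The extract-audit-group and extract-audit-api patterns, verbatim.
group = re.search(r'\s*\"resourceGroup\"\s*\:\s*\"(?P<extracted_group>[^\"]+)\"\s*', sample)
level = re.search(r'\"level\"\:\"(?P<extracted_audit_beta>[^\"]+)\"', sample)

print(group.group("extracted_group"))       # -> batch
print(level.group("extracted_audit_beta"))  # -> info
```

If the regex itself matches like this, the failure may lie in the attribute names rather than the pattern: the filter-group operator compares `attributes.extracted_beta`, while the parser writes `extracted_group` (and remove-extracted-api removes `attributes.extracted_api`, which is never written).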
Hi, here are a couple of thoughts to consider. If you want to run the OTel collector in its own container, are you using the appropriate networking mode for your container? For example, port 4317 (as well as others) would need to bind to the host networking ports so that the other container running your application can refer to "localhost:4317". You may want to start the OTel collector container first and then try some simple tests on the host command line to make sure it's accessible (e.g., 'telnet localhost 4317' or 'nc -vz localhost 4317'). https://lantern.splunk.com/Data_Descriptors/Docker/Setting_up_the_OpenTelemetry_Demo_in_Docker Also, I didn't see any mention of container orchestration (Kubernetes). This might be a good use case for Kubernetes, where you run your application container in Kubernetes and the OTel collector runs in a sidecar configuration. https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-kubernetes/kubernetes-config.html
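If telnet or nc is not available on the host, a small cross-platform sketch of the same reachability check can be done with Python's standard library (host and port below are the OTLP/gRPC defaults mentioned above; adjust for your setup):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (assumes a collector is listening on the default OTLP/gRPC port):
# is_port_open("localhost", 4317)
```

A successful TCP connect only proves the port is bound and reachable, not that the collector pipeline behind it is healthy, but it is usually enough to rule out the container networking issues described above.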
Attempting to make changes in "Set Source Type" and pressing "Apply settings" never seems to change my sample data preview. I'm getting a red exclamation in the top left corner saying "Can only preview uploaded files"; could that be the problem?
Thank you,  This worked for what I asked for.  Group Policy runs every 90-120 minutes so this should return most PCs with errors without duplicating them.  We have about 1000 computers and seem to have about 100 with errors, so this will return about 100 results for the 90 min.  90 min is all I really need to search, maybe 120, but I chose 90.  I can dig into the data more after getting these quick results.  I did realize I probably need all results, not just errors if I Enter a PC, but I can work on that.  I think if I enter a PC, I want all EventIDs, and if I enter an EventID, I want all PCs with that EventID.   Thank you again.  This is working as asked.
Write a script that uses the REST API to pull a list of saved searches, filters on your name, then updates them with the new TTL.  See https://docs.splunk.com/Documentation/Splunk/9.2.0/RESTREF/RESTsearch#saved.2Fsearches
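A minimal standard-library sketch of that script might look like the following. The host, owner, and app values are placeholders; verify the exact endpoint behavior against the REST reference linked above before relying on it.

```python
import urllib.parse
import urllib.request

# Placeholder for your Splunk management host/port.
BASE = "https://splunk.example.com:8089"

def saved_search_urls(names, owner="nobody", app="search"):
    """Build the update URL for each saved search name (names must be URL-encoded)."""
    return [
        f"{BASE}/servicesNS/{owner}/{app}/saved/searches/{urllib.parse.quote(name, safe='')}"
        for name in names
    ]

def update_ttl(url, ttl, opener=urllib.request.urlopen):
    """POST a new dispatch.ttl value to one saved-search endpoint."""
    data = urllib.parse.urlencode({"dispatch.ttl": ttl}).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    with opener(req) as resp:
        return resp.status
```

In practice you would first GET /services/saved/searches (with output_mode=json and your authentication), filter the returned entries on the owner field, and then call update_ttl for each match.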
Thank you @richgalloway for your reply! Since all of the monitoring searches are under my username, would you know of a solution to set dispatch.ttl based on the username rather than the search names? That way, all searches under my username would have dispatch.ttl=3p .
Thank you @ITWhisperer, this saved my day. I read so many posts and watched videos on how to change the font size of column charts, and your solution was spot on.
Hi @anil1219, if the structure of the URL is fixed, you could use

| rex "\/\w+\/\w+\/\w+\/(?<your_field>[^\/]+)"

which you can test at https://regex101.com/r/K3yj0E/1 Ciao. Giuseppe
Hi @dataisbeautiful, first of all, don't use a where condition after the main search; this is a bad practice that makes your search slower. Then you should analyze why you have a delay: do you have sufficient resources on your indexers and search heads? If you have sufficient resources and there's still a delay in indexing, you could try using, in real time, the 60-second window from 70 seconds ago to 10 seconds ago:

index=ind sourcetype=src (type=instrument) earliest=rt-70s latest=rt-10s temperature!=""
| timechart span=1s values(temperature)

Ciao. Giuseppe
Hello - Trying to create a query that will output additions to Azure security group memberships. I am able to successfully output the information I need, but the newValue field contains multiple different values. How do I omit the 'null' value and the security group IDs? I only want it to show the actual name of the security group. The way the logs are currently parsed, all of those values are in the same field - "properties.targetResources{}.modifiedProperties{}.newValue" Query:

index="azure-activity"
| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", "properties.targetResources{}.modifiedProperties{}.newValue", operationName, _time

Output:
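The filtering being asked for can be sketched outside SPL as follows. This is a hypothetical Python illustration: the GUID pattern and the sample values are assumptions, not taken from real Azure logs.

```python
import re

# Matches a GUID-shaped value (the security group object IDs), with or
# without the surrounding quotes Azure puts on newValue entries.
GUID = re.compile(r'^"?[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}"?$')

def group_names(new_values):
    """Keep only display-name entries: drop 'null' and GUID-shaped values."""
    return [v for v in new_values
            if v.strip('"') != "null" and not GUID.match(v)]

print(group_names(['"null"',
                   '"d3b0c3f0-1111-2222-3333-444455556666"',
                   '"Finance Admins"']))  # -> ['"Finance Admins"']
```

In SPL, the same idea could be expressed with mvfilter() and match() on the multivalue newValue field, keeping only values that are neither "null" nor GUID-shaped.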