All Topics



I have some JSON events that contain multiple "date" fields. The date field I want to use as my timestamp comes at the end of every event, and it appears that Splunk is using whichever date field it reads first. Is there a way to specify which date field to use? The fields are in different time formats, and even though I am specifying the time format for epoch time, Splunk still appears to read the first timestamp.

props.conf:

[sourcetype]
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
AUTO_KV_JSON = false
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
TRUNCATE = 20000
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
TIME_PREFIX = "date":+
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13

Data sample:

{"message":"[messageType] This is a message","type":"IntegrationLog","level":"WARN","details":{"_incomingData":{"_parsedData":{"hostNotificationNumber":"1","date":"2020-04-01"},"dateType":"blah","integrationName":"blahblahblah","incomingDataId":"xxxxxxxxxxxxxxx"}},"date":1585775200775}
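One possible direction, sketched here as an assumption rather than a confirmed fix: TIME_PREFIX takes a regular expression, so a pattern that only matches "date": when it is immediately followed by digits will skip the earlier quoted "date":"2020-04-01" value and anchor on the trailing epoch field. This assumes the epoch value is always 13-digit milliseconds.

```
# Sketch only: anchor the timestamp regex to the numeric epoch "date" field.
# The lookahead (?=\d{13}) is an assumption based on the sample event.
[sourcetype]
TIME_PREFIX = "date":(?=\d{13})
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```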
I have made an app with some dashboards. Not everyone who uses this app needs every menu. Is there a way to set it up so a user could go to a configuration page and select specifically which menus to show? Like a custom, changeable navigation menu?
We use the Splunk Add-on for AWS and have multiple accounts that send their CloudTrail logs to an S3 bucket in a specific account. The logs in the bucket are encrypted with a KMS key. Each account has a Splunk user with the required S3, SQS, and KMS permissions, and the S3 bucket has a bucket policy allowing the users from each account full access to the bucket. We have another SQS-based S3 input from an account which sends its CloudTrail to an S3 bucket in the same account; its logs are not encrypted, and that input works fine. When we look at the _internal logs for the inputs which are not working, we get bombarded with the following messages:

2020-05-14 14:50:34,692 level=CRITICAL pid=15774 tid=Thread-6 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:268 | start_time=1589314358 datainput="Stage-Cloudtrail", ttl=30 message_id="22ca88b4-3bc9-4931-9154-0ac84f80a062" created=1589467834.66 job_id=442dd56d-e988-4a4a-ac71-6eafbe24bf3d | message="An error occurred while processing the message."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 256, in _process
    headers = self._download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 290, in _download
    return self._s3_agent.download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 418, in download
    return bucket.transfer(s3, key, fileobj, **condition)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/s3.py", line 73, in transfer
    headers = client.head_object(Bucket=bucket, Key=key, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 272, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 576, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden

We do not assume role with the Splunk users; they have the policy below applied:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "splunk",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole",
        "sqs:SendMessage", "sqs:ReceiveMessage", "sqs:ListQueues", "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes", "sqs:DeleteMessage",
        "sns:Publish", "sns:List*", "sns:Get*",
        "s3:*", "s3:ListBucket", "s3:ListAllMyBuckets", "s3:GetObject",
        "s3:GetLifecycleConfiguration", "s3:GetBucketTagging", "s3:GetBucketLogging",
        "s3:GetBucketLocation", "s3:GetBucketCORS", "s3:GetAccelerateConfiguration",
        "rds:DescribeDBInstances",
        "logs:GetLogEvents", "logs:DescribeLogStreams", "logs:DescribeLogGroups",
        "lambda:ListFunctions",
        "kms:Decrypt",
        "kinesis:ListStreams", "kinesis:Get*", "kinesis:DescribeStream",
        "inspector:List*", "inspector:Describe*",
        "iam:ListUsers", "iam:ListAccessKeys", "iam:GetUser",
        "iam:GetAccountPasswordPolicy", "iam:GetAccessKeyLastUsed",
        "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeInstanceHealth",
        "ec2:DescribeVpcs", "ec2:DescribeVolumes", "ec2:DescribeSubnets", "ec2:DescribeSnapshots",
        "ec2:DescribeSecurityGroups", "ec2:DescribeReservedInstances", "ec2:DescribeRegions",
        "ec2:DescribeNetworkAcls", "ec2:DescribeKeyPairs", "ec2:DescribeInstances",
        "ec2:DescribeImages", "ec2:DescribeAddresses",
        "config:GetComplianceSummaryByConfigRule", "config:GetComplianceDetailsByConfigRule",
        "config:DescribeConfigRules", "config:DescribeConfigRuleEvaluationStatus",
        "config:DeliverConfigSnapshot",
        "cloudwatch:List*", "cloudwatch:Get*", "cloudwatch:Describe*",
        "cloudfront:ListDistributions",
        "autoscaling:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

At this point we have even tried getting the logs in via the Generic S3 and CloudTrail input types; none of them work.
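One detail worth checking for the cross-account case: with SSE-KMS, the KMS key's own key policy must also allow the external principal, in addition to the caller's IAM policy granting kms:Decrypt. A minimal sketch of such a key-policy statement follows; the account ID and user name are hypothetical placeholders.

```json
{
  "Sid": "AllowCrossAccountSplunkDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:user/splunk-reader" },
  "Action": [ "kms:Decrypt", "kms:DescribeKey" ],
  "Resource": "*"
}
```

If the key policy only trusts principals in the bucket-owning account, HeadObject/GetObject on the encrypted objects can fail with 403 even when the bucket policy itself is permissive.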
I have the following request from a client: a new AD group, "Splunk_CAPS_CAS_Payments", whose members are restricted to data that comes from the source log files //LOG_CAPS_CAS_PaymentEDMS//CAPS//CAS_IN_msg_prod.log and //LOG_CAS_CAPS_InvoiceGL//CAS//CAPS_IN_msg_prod.log. Is this possible? These users would only be able to pull the logs from the one CAPS_IN_msg_prod.log. How can this be done?
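As a sketch of one approach: Splunk roles support a search-time restriction via srchFilter in authorize.conf, and an AD group can be mapped to such a role through LDAP group mapping. The stanza name and filter below are illustrative assumptions, not a confirmed configuration.

```
# authorize.conf sketch -- role mapped to the AD group via LDAP strategy.
# The wildcards assume the file paths shown above are the only matches.
[role_caps_cas_payments]
srchFilter = source="*CAS_IN_msg_prod.log" OR source="*CAPS_IN_msg_prod.log"
```

With this in place, searches run by members of the role are implicitly ANDed with the filter, so they can only see events from those sources.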
Hey All, off-the-wall question: I was curious whether anyone has tried this and whether it's advisable. We're looking to add some new indexers (Linux) to our cluster, and in an attempt to save time our server admin has suggested cloning an existing indexer and making whatever changes are needed to hostname, config files, etc. Two options I see:
1) Clone the entire system and use clone-prep-clear-config. Will this even work with an indexer?
2) Clone the entire system, nuke the /opt/splunk directory, and do a fresh install. Are there any other concerns with this method?
Thanks! Andrew
Hi Experts, I have an existing lookup file, test.csv, which contains 3 fields: host, source, and sourcetype. I want to add one extra field, _time, alongside those 3 fields. I have tried:

basesearch | table host source sourcetype _time | outputlookup test.csv append=true

but the new field is not appended. For example, if the existing CSV file contains 100 rows, the 3 fields along with the new field should be added from the 101st row onwards in the CSV. Please help with this; thanks in advance.
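A hedged sketch of an alternative: if append=true is dropping the column that does not exist in the current CSV header, rewriting the whole lookup (old rows plus new rows) avoids relying on append behavior entirely. "basesearch" below stands in for the actual search.

```
| inputlookup test.csv
| append
    [ search basesearch
      | table host source sourcetype _time ]
| outputlookup test.csv
```

This reads the existing 100 rows, appends the new rows carrying _time, and writes the combined result back, so the new column appears in the header for all rows (blank for the old ones).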
Hi experts, I have multiple errors like "***error occurred", "failed error ****", etc. I need to check which errors occurred in the last 24 hours and whether the same errors also occurred in the last 60 days. If an error did not occur in the past, trigger an alert, since that means it is a new error. Please help with this; thanks in advance.
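A sketch of the "new errors only" pattern, using a subsearch to exclude anything already seen in the prior 60 days. The index name and the error_message field are hypothetical; in practice error_message would need to be extracted first (for example with rex or an existing field extraction).

```
index=myindex earliest=-24h ("error" OR "failed")
| stats count by error_message
| search NOT
    [ search index=myindex earliest=-60d@d latest=-24h ("error" OR "failed")
      | stats count by error_message
      | fields error_message ]
```

Saved as an alert triggering when the result count is greater than 0, this would fire only for error messages with no match in the historical window. Note subsearch result limits may matter at 60-day scale.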
I have two types of events, where the important data looks like this:

[
  {
    "acknowledged": false,
    "time": 1588289278000
  },
  {
    "acknowledged": {
      "time": 1588232449000,
      "username": "admin"
    },
    "time": 1588145193000
  }
]

Per day, I want a bar chart of the count of the events that contain an acknowledged object. I also want to plot a line with the average acknowledgement time (acknowledged.time - time).
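A sketch of one way to chart both series, assuming the events are searchable JSON and that acknowledged.time is only present on acknowledged events. Field paths follow the sample above; the base search is a placeholder.

```
index=myindex
| spath "acknowledged.time" output=ack_time
| eval ack_delay_s = (ack_time - time) / 1000
| timechart span=1d
    count(ack_time)  as acknowledged_count
    avg(ack_delay_s) as avg_ack_seconds
```

count(ack_time) only counts events where the nested field exists, which matches "contains an acknowledged object"; the division by 1000 assumes both timestamps are epoch milliseconds. The two series can then be rendered as column + line overlay in the chart settings.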
Hi All, we tried to install the Java agent on Windows. The agent installed and started successfully, but after a few minutes it stops reporting to the Controller. Error log:

[Attach API initializer] 14 May 2020 15:54:06,866 WARN AgentErrorProcessor - Agent error occurred, [name,transformId]=[com.singularity.bci.TransformationManager - com.singularity.ee.agent.appagent.services.bciengine.TimeoutWaitingForLockException,2147483647]
[Attach API initializer] 14 May 2020 15:54:06,866 WARN AgentErrorProcessor - 4 instance(s) remaining before error log is silenced

-Pavan

^ Edited by @Ryan.Paredez improved title and readability
Hi, I'm trying to make a Splunk panel display a value from a log that is appended to every 4 minutes. I need to be able to see on the dashboard if the value suddenly drops. I've tried extracting the value, but it keeps going wrong. Should I use regex, or do I need to extract it a different way? My goal is to return only the number after "value= ". This is how the data looks when it's imported into Splunk; each line is a single event:

2020-05-14T13:39:28.423Z, machine= wefqwr2312, value= 14
2020-05-14T13:40:29.003Z, machine= wefqwr2312, value= 14
2020-05-14T13:40:29.118Z, machine= wefqwr2312, value= 14
2020-05-14T13:41:28.316Z, machine= wefqwr2312, value= 14
2020-05-14T13:41:28.323Z, machine= wefqwr2312, value= 14
2020-05-14T13:45:48.032Z, machine= wefqwr2312, value= 14
2020-05-14T13:45:48.041Z, machine= wefqwr2312, value= 14

Thanks!
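A sketch of a search-time extraction matching the sample lines above (the index/source in the base search is a placeholder):

```
index=myindex
| rex "value=\s*(?<value>\d+)"
| timechart span=4m latest(value) as value
```

The rex pattern tolerates the space after "value=" in the sample; \d+ assumes the value is always an integer. Plotted as a line on the panel, a sudden drop would show directly, and the same search could back an alert with a "where value < threshold" clause.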
How do I use splunk-sdk in Vue to implement search? In my project I import splunk-sdk via Node.js and call splunkjs.ProxyHttp('/proxy'), but I get an error saying there is no ProxyHttp constructor. How can I resolve this?
Hi, please help: I want to get the x-axis values in a bar chart. In the image attached, I have a query without the transpose command where the values appear on the x-axis; when I changed the color for each bar using the transpose command, the x-axis values suddenly stopped appearing. In the query I am displaying the top 10 highest values.
I want to create a dashboard containing a table where each row is a dashboard and the columns are that dashboard's panels. The value should be either "Found Results" (if the panel of the specific dashboard has results) or "No Results" (if the panel returns no results). Thanks in advance.
We have recently turned on journaling within MS Exchange, which basically sends a copy of every item to a journaling mailbox. We know the email address the process uses, and it appears in the message tracking logs as additional emails; in short, journaling has doubled our Splunk licence usage! We would like to exclude the journaling email address from being indexed. Exchange can't turn off message tracking for certain email addresses. We are using the UF on the Exchange servers with a load-balanced intermediate level of heavy forwarders. We have tried to apply the exclusion based on Answer 289736 - how to exclude a sourcetype from being indexed - but using a regex that picks up the journaling email address. The address we want to exclude is: journal@ev.local. We've tried it on the intermediate heavy forwarders and on the index servers with no effect. The config we have applied is:

props.conf

[MSExchange:2013:MessageTracking]
TRANSFORMS-JournalRemoval = JournalRemoval

transforms.conf

[JournalRemoval]
REGEX = .*journal@ev.*
DEST_KEY = queue
FORMAT = nullqueue

Any ideas why this might not be working? Thanks
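For comparison, a sketch of the discard stanza as it appears in the Splunk routing documentation. One detail that stands out against the config above: the documented FORMAT value is nullQueue with a capital Q, and if the queue name is treated as case-sensitive that alone could make the transform a silent no-op.

```
# transforms.conf -- documented "discard to nullQueue" pattern.
# Regex tightened to the literal address; note the nullQueue capitalization.
[JournalRemoval]
REGEX = journal@ev\.local
DEST_KEY = queue
FORMAT = nullQueue
```

Also worth verifying: the transform must sit on the first instance that parses the data; if the sourcetype uses indexed extractions on the UF, the heavy forwarders may not re-parse it.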
I am working on an approach for uploading logs to Splunk. I have a set of queries to run against the logs to extract values. How can I run the queries automatically as soon as a user uploads logs, without the user querying manually, and give them the results?
Hello Experts, we have a list of workflow actions in the field menu and event menu which are sorted alphabetically. I would like that order to match our own priorities instead. (E.g., Asset Investigator is more important than Access Search, so I want Asset Investigator on top, before Access Search.) Is this doable? If yes, can you suggest how we can achieve it? Thanks in advance.
Hi, Has anyone run the MS Windows AD Objects version 3.2.9 APP on Splunk Enterprise 8.0.x? If so, how was your experience... did you get it to work... did you have to do anything special to get it working? Any one know when might a version of the APP compatible with Splunk Enterprise 8.0.x be available? Thanks in advance for your feedback.
Hello fellow Splunkers, I want to create an alert for the following search. The search creates a statistics matrix which lists the number of events from each host for the timespan defined in the search.

index=wineventlog source="WinEventLog:Security" host=testsrv1 OR host=dc* | timechart span=6h count by host limit=100

I want to define a threshold value for events in that timespan. If one of the hosts drops below this threshold in a 6h timespan, an alert should be triggered, where I could define email/SMS messaging etc. I've attached a picture; my goal is to detect abnormal behaviour like a drop or a very high peak. I'm not sure whether I can have a dynamic threshold or something like that, but a static threshold would be good for the moment. BR vess
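A sketch of the static-threshold version, reshaped so the alert fires when any host falls below the limit over the last 6 hours. The threshold of 100 is purely illustrative.

```
index=wineventlog source="WinEventLog:Security" (host=testsrv1 OR host=dc*)
  earliest=-6h
| stats count by host
| where count < 100
```

Scheduled every 6 hours with the trigger condition "number of results > 0", each row returned is a host below threshold. One caveat: a host that sends nothing at all produces no row here, so catching fully silent hosts would additionally require a lookup of expected hosts to fill in zero counts.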
Clicking on the Splunk Add-on Builder app opens a blank page, and Messages shows: "After installing Splunk Add-on Builder, why do I receive error "Unable to initialize modular input "validation_mi""

Splunk version 7.2.5
Splunk Add-on Builder app version 2.2.0

I installed the latest version, 3.0.1, and see the same message as above. It would be helpful if someone could provide a solution.
I have a search from an input lookup, and I have appended search results from an index so I can overlay some results, but the dates are not matching up.

| inputlookup user
| where stat1=1 OR stat2=1 AND tonumber(strftime(_time,"%Y")) > 2019
| eval Date = strftime(_time,"%d")
| timechart span=1d max(Date) as days, count as Registrations
| eval DayRate = round(Registrations / days, 1)
| fields - days
| appendcols [search index="index1" | stats values(Confirmed) by Date]

This shows me results where the overlay red line should match the dates of the rest of the graph, but as you can see the dates are not aligning. Please advise on how I can correlate the dates onto the same row. I tried append and join with no luck.
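One hedged sketch of the idea: appendcols pastes rows together positionally, so the two result sets only line up if they have identical row order and length; joining on an explicit date key avoids that. This assumes the subsearch's Date field can be formatted to match; the "%Y-%m-%d" format is an assumption.

```
| inputlookup user
| where stat1=1 OR stat2=1 AND tonumber(strftime(_time,"%Y")) > 2019
| timechart span=1d count as Registrations
| eval Date = strftime(_time, "%Y-%m-%d")
| join type=left Date
    [ search index="index1"
      | stats values(Confirmed) as Confirmed by Date ]
```

With a shared Date key, each Confirmed value lands on the row for its own day, so the overlay line follows the same dates as the columns.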