All Topics

Hi all, I'm testing a connection that I created using an Oracle DB account, and I need to call or execute the query below so that I can build a dashboard.

DECLARE
  v_cursor SYS_REFCURSOR;
  faID varchar(12);
  description varchar(50);
  currency varchar(3);
  amount number;
  startD varchar(8);
  endD varchar(8);
  loID varchar(12);
  Unit varchar(50);
BEGIN
  TM_MAIN('B', '00024657', v_cursor);
  LOOP
    FETCH v_cursor INTO faID, description, currency, amount, startD, endD, loID, Unit;
    EXIT WHEN v_cursor%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(faID || ' | ' || description || ' ' || currency || ' ' || amount || ' ' || startD || ' ' || endD || ' ' || loID || ' | ' || Unit);
  END LOOP;
  CLOSE v_cursor;
END;
We are planning to set up threat feed integration in ES. We have installed the CrowdStrike Intel add-on and now need to set up the threat feeds. Can you please advise whether there is a specific guide for how to do this with CrowdStrike threat intel?
I've searched for quite some time, but I can't find out why Splunk is not recognizing a nested JSON. Here's what my data/events look like in raw text (the path of the data is SES->SNS->Lambda->HEC):

{"Records":[{"EventSource":"aws:sns","EventVersion":"1.0","EventSubscriptionArn":"arn:aws:sns:eu-north-1:doesntmatter","Sns":{"Type":"Notification","MessageId":"87b93315-f1f6-56f8-83dc-6b099eb5e18e","TopicArn":"arn:aws:sns:eu-north-1:doesntmatter","Subject":null,"Message":"{\"notificationType\":\"Delivery\",\"mail\":{\"timestamp\":\"2020-11-04T08:57:37.646Z\",\"source\":\"email@email.com\",\"sourceArn\":\"arn:aws:ses:eu-north-1:MYaccountID:identity/email@email.com\",\"sourceIp\":\"X.X.X.X\",\"sendingAccountId\":\"MYaccountID\",\"messageId\":\"011001759279ce6e-67642459-31fa-4f4b-b852-315ea7e8d284-000000\",\"destination\":[\"email@email.com\"],\"headersTruncated\":false,\"headers\":[{\"name\":\"Received\",\"value\":\"from ip-X.X.X.X.eu-north-1.compute.internal (ec2-X.X.X.X.eu-north-1.compute.amazonaws.com [X.X.X.X]) by email-smtp.amazonaws.com with SMTP (SimpleEmailService-d-090KRTZ85) id RANDOM for email@email.com; Wed, 04 Nov 2020 08:57:37 +0000 (UTC)\"},{\"name\":\"Content-Type\",\"value\":\"multipart/mixed; boundary=\\\"===============digits==\\\"\"},{\"name\":\"MIME-Version\",\"value\":\"1.0\"},{\"name\":\"Subject\",\"value\":\"to me from NEW NEW\"},{\"name\":\"To\",\"value\":\"email@email.com\"},{\"name\":\"From\",\"value\":\"email@email.com\"},{\"name\":\"Date\",\"value\":\"Wed, 04 Nov 2020 08:57:37 +0000\"},{\"name\":\"X-Priority\",\"value\":\"3\"},{\"name\":\"X-Splunk-SID\",\"value\":\"digits.2\"},{\"name\":\"X-Splunk-ServerName\",\"value\":\"ip-X.X.X.X.eu-north-1.compute.internal\"},{\"name\":\"X-Splunk-Version\",\"value\":\"8.1.0\"},{\"name\":\"X-Splunk-Build\",\"value\":\"f57c09e87251\"}],\"commonHeaders\":{\"from\":[\"email@email.com\"],\"date\":\"Wed, 04 Nov 2020 08:57:37 +0000\",\"to\":[\"email@email.com\"],\"subject\":\"to me from NEW NEW\"}},\"delivery\":{\"timestamp\":\"2020-11-04T08:57:39.153Z\",\"processingTimeMillis\":1507,\"recipients\":[\"email@email.com\"],\"smtpResponse\":\"250 2.6.0 <digits@eu-north-1.amazonses.com> [InternalId=digits, Hostname=random hostname] 12107 bytes in 0.057, 206.415 KB/sec Queued mail for delivery\",\"remoteMtaIp\":\"Y.Y.Y.Y\",\"reportingMTA\":\"e240-9.smtp-out.eu-north-1.amazonses.com\"}}","Timestamp":"2020-11-04T08:57:39.198Z","SignatureVersion":"1","Signature":"SIGNATURE","SigningCertUrl":"https://sns.eu-north-1.amazonaws.com/SimpleNotificationService-PEM.pem","UnsubscribeUrl":"https://sns.eu-north-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-north-1:doesntmatter","MessageAttributes":{}}}]}

I'm sending the data to :8088/collector. Here's a screenshot of how the data looks in the search UI (attached). I would like to extract some fields from "Message", which is valid JSON, but I'm not able to, since Splunk is not recognizing it as JSON and my regex knowledge is close to zero. I believe it has something to do with the backslashes, but when I try to remove them with

SEDCMD-replace_backslash = s/\\//g

in my props.conf, Splunk stops recognizing the whole event as JSON and no longer formats it (it shows as raw when I search). My props.conf looks like:

[ses_json_new]
category = Custom
pulldown_type = 1

If I add KV_MODE = json to props.conf, nothing happens, nor with INDEXED_EXTRACTIONS = json. I appreciate your help, and please excuse my inexperience. Thank you.
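One search-time approach (a sketch only; the sourcetype is taken from the post above, and the output field names are guesses from the sample event, so verify them against your data) is to leave the raw event and the backslashes untouched and unpack the escaped Message string with spath, which can parse the embedded JSON a second time:

```
sourcetype=ses_json_new
| spath output=message path=Records{}.Sns.Message
| spath input=message
| table notificationType mail.source delivery.smtpResponse
```

The first spath pulls the Message field out of the outer JSON (Splunk unescapes the \" sequences during extraction); the second spath parses that string as JSON in its own right, so no SEDCMD is needed in props.conf.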
Hi All, We are trying to break multi-line events into single events by customizing the configuration provided in the Splunk_TA_AWS add-on. The reason for doing this is testing: we want to break up the uploaded JSON payload so that each element ( { id: , timestamp:, message: } ) becomes an individual event, extract the payload-level logGroup: and map it to the source meta field, and send the payload-level unnecessary data to the nullQueue. When we test the configuration below on the live stream of data, Splunk is unable to break the multiple events into single events.

props.conf
[aws:kinesis]
SHOULD_LINEMERGE=false
LINE_BREAKER=(\[|,\s*|\], )({"id":|"logGroup":)
disabled=false
MAX_TIMESTAMP_LOOKAHEAD=13
TIME_FORMAT=%s%3Q
TIME_PREFIX="timestamp":\s+
TZ=UTC
TRUNCATE=100000

aws_kinesis_tasks.conf
[unify_timestamp_test]
account = splunk-TA-aws-instance-role
aws_iam_role = test_acc_np
index = unify_main
init_stream_position = LATEST
region = ap-southeast-2
sourcetype = aws:kinesis
stream_names = test-kin-splunkSharpIngestionLogStream
disabled = 1

But it works perfectly when we upload sample raw data from the live stream into the test environment; there Splunk does break the multiple events into single events. I have attached a snapshot for reference.
Sample data:
{ "owner": "111111111111", "logGroup": "CloudTrail", "logStream": "111111111111_CloudTrail_us-east-1", "subscriptionFilters": [ "Destination" ], "messageType": "DATA_MESSAGE", "logEvents": [ { "id": "31953106606966983378809025079804211143289615424298221568", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" }, { "id": "31953106606966983378809025079804211143289615424298221569", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" }, { "id": "31953106606966983378809025079804211143289615424298221570", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" } ] }
PIC-1 -- Shows the events when Splunk parses and ingests the live stream of data from Kinesis.
PIC-2 -- When the same sample data is uploaded in the test environment, Splunk breaks the multiple events into single events, using LINE_BREAKER=(\[|,\s*|\], )({"id":|"logGroup":)
Hi Splunkers, I would like to ask your opinions on my dashboard. I started with Splunk almost 2 months ago, so why not ask the experts on the Splunk Community. Problem: clicking on line chart 1 (imagine: % sales by month) should transform it into multi-series line chart 2 (imagine: % sales by category by month). To go back, the user clicks anywhere on multi-series line chart 2. However: 1. Multi-series line chart 2 is built from a base search + a join on (_time and categories) + a [subsearch]; it is quite a complicated query to build. 2. The goal is to deliver Splunk views on our own website through the SplunkJS Stack. I have built some chart/table views but have never wired up a drilldown, so I am somewhat worried about the limits of drilldown actions in SplunkJS; I hope I will not face any problems with JS. My solution for the moment: a hidden multi-series line chart 2 pops up when clicking on chart 1 (token $click.value2$), and to go back I would create a checkbox with a token. I am not at all happy with this solution, but I don't have any other ideas: it creates 2 line charts instead of updating the original chart, and I need a checkbox or something similar to go back. Looking forward to hearing your discussion.
We have a dashboard with 50k events coming in every day. If a user tries to export results for the last 30 days and does not get the full results, is there a way they could be notified? How can we verify that the exported data matches what is in the dashboard?
hi As you can see at the end of my search, I use a where condition. But sometimes, even when the condition is true ('Geolocation building' = 'S building'), the event is still displayed. What is wrong, please?

`wire`
| fields AP_NAME USERNAME LAST_SEEN
| eval USERNAME=upper(USERNAME)
| eval LAST_SEEN=strptime(LAST_SEEN, "%Y-%m-%d %H:%M:%S.%1N")
| lookup ap.csv NAME as AP_NAME OUTPUT Building Country Site
| lookup fo_all HOSTNAME as USERNAME output SITE ROOM COUNTRY BUILDING_CODE
| eval Building=upper(Building)
| eval Site=upper(Site)
| eval SITE=upper(SITE)
| eval LAST_SEEN = strftime(LAST_SEEN, "%Y-%m-%d %H:%M")
| stats last(LAST_SEEN) as "Last check date", last(AP_NAME) as "Access point", last(Site) as "Geolocation site", last(Building) as "Geolocation building", last(SITE) as "SNOW site", last(BUILDING_CODE) as "S building" by USERNAME
| where NOT ('Geolocation building' = 'S building')

I have also tested with:
| search NOT ("Geolocation building" = "S building")
but I get the same result.
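Two things worth checking here. First, quoting semantics: in `where`, single quotes refer to field names and double quotes to string literals, so `'Geolocation building' = 'S building'` compares the two fields, while the `search` variant with double quotes compares a field to the literal text "S building"; the two filters are not equivalent, and only the `where` form does a field-to-field comparison. Second, if events still slip through when the values look equal, invisible whitespace in lookup values is a common cause. A sketch (the intermediate field names geo and snow are made up for illustration) that normalizes both sides before comparing:

```
| eval geo=trim('Geolocation building'), snow=trim('S building')
| where NOT (geo = snow)
```

This is an assumption about the data, not a diagnosis; comparing len('Geolocation building') with len(trim('Geolocation building')) on an offending event would confirm or rule it out.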
As per the screenshot below, my token is not working when I put this search in a panel. Please let me know why my token is not working properly. My token is "mfg_host".
Hello, I need urgent help. I created a HEC data input, following these guidelines: https://docs.splunk.com/Documentation/Splunk/8.1.0/Data/HECExamples https://docs.splunk.com/Documentation/Splunk/8.1.0/Data/UsetheHTTPEventCollector The test was successful and I'm able to get {"text": "Success", "code": 0}. However, the index is still empty, while I expected it to contain the message data. What could be the reason? Our Splunk deployment is as follows: 1 search head instance, 2 indexer instances, 4 forwarder instances. I created the HEC on the search head via the GUI. Please advise, and thanks in advance.
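One common cause in this topology (an assumption, not a diagnosis): a HEC input created on a search head writes events to the search head's local indexes unless the search head forwards its data to the indexers, so searches against the search peers find nothing. A sketch of the usual fix, based on Splunk's documented "forward search head data" pattern, with placeholder indexer hostnames:

```
# outputs.conf on the search head (server names are placeholders)
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1:9997,indexer2:9997
```

Alternatively, sending HEC traffic directly to a HEC endpoint enabled on the indexers avoids the extra hop entirely.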
Hi, can someone please help me out here? I want to forward particular events to a target server, while the other receiver gets all logs by default since it is in the default group.

transforms.conf
[logs_type1]
REGEX = (logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called)
DEST_KEY = _TCP_ROUTING
FORMAT = esxireceivier

props.conf
[vmw_logs]
TRANSFORMS-routing=logs_type1

Is my configuration fine, or can someone help me out here?
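For _TCP_ROUTING transforms to take effect, they must run on the instance that first parses the data (a heavy forwarder or an indexer, not a universal forwarder), and the group name in FORMAT must match a tcpout group defined in outputs.conf. A sketch of the outputs side, with placeholder server addresses (the group name esxireceivier is kept from the transforms above; the default group carries everything that the transform does not match):

```
# outputs.conf (server addresses are placeholders)
[tcpout]
defaultGroup = default_group

[tcpout:default_group]
server = mainreceiver:9997

[tcpout:esxireceivier]
server = targetserver:9997
```

Events matching the REGEX are rerouted to the esxireceivier group; all other vmw_logs events follow the default group.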
I was trying to sign up and register for the Fundamentals 2 course, but it keeps wanting to charge me, and I am a veteran. Is it because my account is still being processed and I should try again later, after I complete Fundamentals 1, or is there a code that needs to be entered? Thanks
Is it possible to drop events if they occur within a certain timespan of each other? I'm specifically looking at VMware View logs and trying to correlate external user login sessions. Normally there is a BROKER_USERLOGGEDIN event and then AGENT RECONNECT/CONNECT events. Unfortunately, every once in a while there is a network hiccup and a client disconnects/reconnects without a BROKER_USERLOGGEDIN event (like in the attached picture). I want to ignore/drop any EventType=AGENT_DISCONNECT and EventType=AGENT_RECONNECT events if they happen within 60 seconds.
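Dropping by inter-event timing at index time isn't really feasible, since each event is parsed independently, but at search time this can be sketched with streamstats. The sourcetype and the UserName field below are guesses for typical View logs, not confirmed by the post; adjust them to your extractions:

```
sourcetype=vmware:view (EventType=BROKER_USERLOGGEDIN OR EventType=AGENT_DISCONNECT OR EventType=AGENT_RECONNECT)
| sort 0 _time
| streamstats current=f last(_time) as prev_time by UserName
| eval gap = _time - prev_time
| where NOT ((EventType="AGENT_DISCONNECT" OR EventType="AGENT_RECONNECT") AND gap < 60)
```

streamstats carries forward the timestamp of the previous event per user, so a disconnect/reconnect arriving less than 60 seconds after the prior event is filtered out while BROKER_USERLOGGEDIN events always survive.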
I'm trying to extract multiple values for a single field. I've got the beginnings of the regex sorted to extract it, but I don't know how to separate the values. For below rather than have gzip,deflate as a single value I'd like gzip and deflate as separate values. Does anyone have any advice? Regex: Accept-Encoding: (?P<encoder>\w+.*?)\\n>> Log: Accept-Encoding: gzip,deflate\n>> Results: (attached image)
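Assuming the goal is a multivalue field at search time, one approach is to keep the existing regex (capturing the whole comma-separated list, as in the post) and split the captured value afterwards with makemv:

```
| rex "Accept-Encoding: (?P<encoder>\w+.*?)\\n>>"
| makemv delim="," encoder
```

makemv turns "gzip,deflate" into a multivalue field with gzip and deflate as separate values; for an inline extraction defined in transforms.conf, MV_ADD = true with a repeating capture is the equivalent knob.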
Hello? In the Lookup Editor app that we used at my previous company, I could sort by clicking on a field name, but that does not work now. I'm working at another company now. Do I need any other settings?
Hi, I am dealing with a situation where many users in production are making clones of knowledge objects, e.g. scheduled reports, views, etc. How can I prevent this? Are there any settings in the configurations or role capabilities I can apply to prevent it? Our access control configuration gives read-only access to users; write access is for developers and admins only. This still does not prevent users from making clones. Thanks in advance!!!
Hello, when I try to install the Webtools Add-on from file, I receive the following message: "Unable to initialize modular input "test_port_input" defined in the app "TA-webtools": Introspecting scheme=test_port_input: script running failed (exited with code 1)." Is anyone familiar with this error message? Add-on: https://splunkbase.splunk.com/app/4146/ Version 1.3.0
We have multiple use cases where service accounts need to connect through the Splunk API, but we want to make sure the service accounts cannot be used for Splunk Web logins. Is this possible?
Hello, we configured a generic S3 input to pull CloudTrail logs from a centralized bucket. These log files for the different accounts are encrypted with KMS. We gave the right permissions to the account that we used, but Splunk is not reading these encrypted zip files. It is also not organizing/parsing the logs by account. Please help me troubleshoot this. Thanks, RR
I have an indexer cluster running version 7.2.4.2. If I have a search head running version 7.3.0, will it have any issues connecting to run searches?
After we create an alert for the Splunk App for Infrastructure, we can choose one of the following default alert methods: email, VictorOps, Slack, and custom webhook. Instead of using these default methods, we want to run a script once an alert is triggered. Is this alert action possible? If yes, how can we do it?
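A sketch of one possible route, assuming the SAI alert is backed by an ordinary saved search that you can edit: any saved search can carry the legacy script alert action (or, preferably, a packaged custom alert action). The stanza name and script filename below are placeholders; the script itself would live under $SPLUNK_HOME/bin/scripts:

```
# savedsearches.conf (stanza and script names are placeholders)
[my_sai_alert]
action.script = 1
action.script.filename = run_on_alert.sh
```

The script action is deprecated in favor of building a custom alert action app, so if this needs to survive upgrades, wrapping the script in a small custom alert action is the more durable choice.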