All Topics

Hi fellow Splunk users, I need help setting up a search query (later to be saved as an alert) to check failed login attempts to our EC2 instances. In my organization, we don't allow SSH login. On top of that, I also want to see if a person tried to change any sensitive config files inside an instance. Logs are already coming in from AWS CloudTrail; below is what I have so far. Thanks in advance for all the help and input.

    index="main" sourcetype="aws:cloudtrail" | spath errorCode | search errorCode=AccessDenied

Sample event:

    { [-]
      awsRegion: eu-west-1
      errorCode: AccessDenied
      errorMessage: User: User is not authorized to perform: glue:GetSecurityConfigurations
      eventID: faf2053d-2bd2-41b3-93ff-a7e841979cea
      eventName: GetSecurityConfigurations
      eventSource: glue.amazonaws.com
      eventTime: 2020-02-20T20:43:14Z
      eventType: AwsApiCall
      eventVersion: 1.05
      recipientAccountId: 155166966842
      requestID: 86a20648-687a-4c3e-9f4a-ce07f1704217
      requestParameters: null
      responseElements: null
      sourceIPAddress: 18.221.72.80
      userAgent: aws-sdk-java/1.11.699 Linux/4.14.77-70.59.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.202-b08 java/1.8.0_202 groovy/2.4.15 vendor/Oracle_Corporation
      userIdentity: { [+] }
    }
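With SSH disabled, failed interactive logins surface in CloudTrail as console or API authentication failures rather than OS-level events. One possible refinement of the search above — a sketch only; `ConsoleLogin` and `responseElements.ConsoleLogin` are standard CloudTrail names, but verify them against your own events before alerting on this:

```
index="main" sourcetype="aws:cloudtrail"
    ((eventName=ConsoleLogin "responseElements.ConsoleLogin"=Failure)
     OR errorCode=AccessDenied OR errorCode="*UnauthorizedOperation*")
| stats count by eventName, errorCode, sourceIPAddress, userIdentity.arn
| sort - count
```

One caveat: CloudTrail records AWS API calls, so edits to config files inside an instance will not appear there; file-level change detection would need host auditing (e.g. auditd or a file integrity monitor) forwarded to Splunk separately.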
So basically what I'm trying to do is this: I want a radio button at the top of the page and, depending on which of the four choices of said radio button is selected, make a whole swath of panels appear/disappear. I'm having trouble figuring out how to use tokens to achieve this. Thanks in advance for any help! (I should also mention that I'm running 7.2.5.)
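A common Simple XML pattern for this is to have the radio input set a distinct token per choice, and give each panel a depends attribute on the matching token. A fragment sketching two of the four choices, with hypothetical token and value names:

```xml
<input type="radio" token="view">
  <label>Select view</label>
  <choice value="net">Network</choice>
  <choice value="sec">Security</choice>
  <change>
    <condition value="net">
      <set token="show_net">true</set>
      <unset token="show_sec"></unset>
    </condition>
    <condition value="sec">
      <set token="show_sec">true</set>
      <unset token="show_net"></unset>
    </condition>
  </change>
</input>
```

Then each panel hides itself until its token exists, e.g. `<panel depends="$show_net$">...</panel>`. Panels whose token is unset are not rendered, which is what makes whole groups appear and disappear together.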
Hi, I installed and configured a UF on a Linux server to send syslog to a Splunk HF. I am now trying to send an application log from the same server, say /opt/application/applog.log, to the HF. What do I need to modify in the UF .conf file(s)? Thanks.
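Assuming the path from the post, a minimal monitor stanza in inputs.conf on the UF would look like the sketch below; the index and sourcetype values are placeholders to adjust:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf (or an app's local/ dir) on the UF
[monitor:///opt/application/applog.log]
index = main                  # assumption: replace with your target index
sourcetype = application:log  # assumption: pick a sourcetype your indexers expect
disabled = false
```

Since outputs.conf already points the UF at the HF for syslog, no output change should be needed; restart the UF after adding the stanza.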
My query below gives the Rejection %, but I need to filter one step further: it should not show me results where the last 3 consecutive occurrences of a merchantId had status "CONFIRMED". Is this possible?

    index=apps status=CONFIRMED OR status=REJECTED partner_account_name="Level Up"
    | stats count by status, merchantId
    | xyseries merchantId, status, count
    | eval result = (REJECTED)/((CONFIRMED+REJECTED))*100
    | eval count = CONFIRMED + REJECTED
    | where count >= 5
    | where result >= 20
    | sort result desc
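One way to express "the last 3 consecutive occurrences were CONFIRMED" is to compute a rolling window of confirmations per merchantId before aggregating, then drop merchants whose final window is all CONFIRMED. A sketch only, untested against your data:

```
index=apps (status=CONFIRMED OR status=REJECTED) partner_account_name="Level Up"
| sort 0 merchantId _time
| streamstats window=3 count(eval(status="CONFIRMED")) as confirmed_in_window by merchantId
| eventstats last(confirmed_in_window) as last3_confirmed by merchantId
| where last3_confirmed < 3
| stats count(eval(status="CONFIRMED")) as CONFIRMED,
        count(eval(status="REJECTED")) as REJECTED by merchantId
| eval count=CONFIRMED+REJECTED, result=round(REJECTED/count*100, 2)
| where count >= 5 AND result >= 20
| sort - result
```

The sort 0 keeps events in merchantId/time order so the streamstats window and the eventstats last() both refer to the most recent three events per merchant.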
I want to upgrade Splunk Enterprise 7.2 to Splunk Enterprise 8.1 on Red Hat Linux 7.7. Per the Splunk upgrade docs, Splunk Web is compatible only with Python 3.7; here is the link: https://docs.splunk.com/Documentation/Splunk/8.0.2/Installation/PlanPython3 I already have Python 3.6 on RHEL 7.7. My question: is Python 3.6 compatible with Splunk Enterprise 8.1, or do I need to upgrade Python to 3.7? Thanks in advance.
I'm trying to use flush() in a custom command and it's not working. I used generatetext.py from searchcommands_app, added self.flush(), and the search finished with errors.

    def generate(self):
        text = self.text
        self.logger.debug("Generating %d events with text %s" % (self.count, self.text))
        for i in range(1, self.count + 1):
            yield {'_serial': i, '_time': time.time(), '_raw': six.text_type(i) + '. ' + text}
            self.flush()

Error:

    02-20-2020 14:32:12.814 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
    02-20-2020 14:32:12.990 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
    02-20-2020 14:32:12.990 ERROR ChunkedExternProcessor - Failed to write buffer of size 17 to external process file descriptor (The pipe is being closed.)
    02-20-2020 14:32:13.024 ERROR ChunkedExternProcessor - Failure writing result chunk, buffer full. External process possibly failed to read its stdin.
    02-20-2020 14:32:13.024 ERROR ChunkedExternProcessor - Error in 'generatetext' command: Failed to send message to external search command, see search.log.
    02-20-2020 14:32:13.024 INFO  ReducePhaseExecutor - Ending phase_1
    02-20-2020 14:32:13.024 INFO  UserManager - Unwound user context: admin -> NULL
    02-20-2020 14:32:13.024 ERROR SearchOrchestrator - Phase_1 failed due to : Error in 'generatetext' command: Failed to send message to external search command, see search.log.
    02-20-2020 14:32:13.025 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
    02-20-2020 14:32:13.025 INFO  DispatchExecutor - User applied action=CANCEL while status=0
    02-20-2020 14:32:13.025 ERROR SearchStatusEnforcer - sid:1582219925.18 Error in 'generatetext' command: Failed to send message to external search command, see search.log.
    02-20-2020 14:32:13.025 INFO  SearchStatusEnforcer - State changed to FAILED due to: Error in 'generatetext' command: Failed to send message to external search command, see search.log.
    02-20-2020 14:32:13.090 INFO  UserManager - Unwound user context: admin -> NULL

Any help?
Hello, I'm running Splunk data model acceleration and it stopped working. It is stuck in "skipping" and nothing happens. With summariesonly=true I get no results, but if I set it to false I get results. Also, I've created a new event-based data model and it's working; the first one was search-based. I couldn't find any errors in the logs. Any suggestions?
This is more of a bug report than a question. I am working with the billing data coming from our new GCP environment and have found that CSV data where a string contains a comma is not being parsed properly. I discovered this from data where the Credit1 field says "External IPs will not be charged until April 1, 2020." It splits the row into two columns, pushing all data out by one and therefore losing the Description field value from that event. The data looks like this in Splunk:

    { [-]
      Account ID: 111111-222222-333333
      Cost: GBP
      Credit1: "External IPs will not be charged until April 1
      Credit1 Amount: 2020."
      Credit1 Currency: -0.156831
      Currency: 0.156831
      Description:
      End Time: 2020-02-20T00:00:00-08:00
      Line Item: com.google.cloud/services/compute-engine/ExternalIp
      Measurement1: com.google.cloud/services/compute-engine/ExternalIp
      Measurement1 Total Consumption: 183661
      Measurement1 Units: seconds
      Project: 12344566778
      Project ID: 12344566778
      Project Labels: project
      Project Name: project
      Project Number: GBP
      Start Time: 2020-02-19T00:00:00-08:00
    }

This parsing isn't done in the props of the sourcetype but in the Python code, at line 260 of Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/billing.py. I'm not an expert in GCP data, so I don't know the best course of action to fix this bug without breaking other things. My assumption is that no data values will have "word,word" but rather "word, word" with a space, so would the fix here be to change the split from just a comma to a comma with no space afterwards? i.e. in Python: line = line.split(',[^\s]')
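A note on the proposed fix: str.split does not take a regex (that would need re.split), and a pattern-based split would still break on any quoted field that happens to match. Since the export quotes fields containing commas, the robust fix is a quote-aware CSV parser. A sketch with an illustrative line (not actual billing data):

```python
import csv
import io

# Illustrative line in the shape of the billing export: the second field
# is quoted because it contains an embedded comma.
line = 'acct-1,"External IPs will not be charged until April 1, 2020.",-0.156831'

naive = line.split(",")                       # splits inside the quoted field
parsed = next(csv.reader(io.StringIO(line)))  # honors RFC 4180 quoting

print(len(naive))    # 4 columns: the quoted field was broken in two
print(len(parsed))   # 3 columns: the quoted field stays intact
print(parsed[1])     # External IPs will not be charged until April 1, 2020.
```

Swapping the line.split(',') call in billing.py for csv.reader would keep quoted strings whole without guessing at spacing conventions.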
I set up syslog output forwarding per the Splunk docs, but am not seeing anything being sent out, nor receiving it on the endpoint. All I'm trying to do is forward some data to a syslog server via a TCP port from a heavy forwarder. Here is what I have applied on the heavy forwarder.

outputs.conf on the heavy forwarder:

    [syslog]
    defaultGroup = forwarders_syslog

    [syslog:forwarders_syslog]
    server = syslog_hostname:port
    clientCert = $SPLUNK_HOME/etc/auth/output-cert.pem
    maxQueueSize = 20MB
    sslPassword = xxxxxxx
    type = tcp
    sendCookedData = false
    indexAndForward = 1
    compressed = true
    sslVerifyServerCert = false

Note: the configuration for forwarding the data to syslog is under [syslog:forwarders_syslog].

props.conf on the heavy forwarder:

    [sourcetype::XYZ]
    TRANSFORMS-ABC_DEF = send_to_ABC_DEF

transforms.conf on the heavy forwarder:

    [send_to_ABC_DEF]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = forwarders_syslog

I tried the following troubleshooting steps to identify the root cause, without success:

- Able to telnet to the syslog server from the heavy forwarder on the port specified in outputs.conf.
- Ran netstat -tnlp on the destination server and see the required port is listening and open.
- Seeing some traffic between source and destination.

Not sure what else I should be checking to identify the root cause and fix the issue. I do see an error in splunkd.log:

    ERROR OutputProc - Failed to send data to syslog_hostname:port. Failed to send data with TCPClient::send. err=-3

Also seeing blocked=true in metrics.log:

    INFO Metrics - group=queue, name=forwarders_syslog, blocked=true, max_size_kb=97, current_size_kb=97, current_size=147, largest_size=150, smallest_size=26
Hi all, First, I do apologise if this is clearly answered in Answers or Documentation; I have spent some time in both, and have still to find an answer. Second, I am very new to Splunk. In fact, this question comes directly from Fundamentals One; a throw-away comment in Module 8, to be specific. And so, my question: on the subject of search performance, and field extraction in particular, the instructor states that field inclusion can provide a boost, as it occurs before field extraction; he then goes on to say that field exclusion offers no such benefit, as it occurs after field extraction. I'm trying to wrap my head around why this is the case; that is, why field exclusion differs so markedly from field inclusion, in terms of what Splunk knows about the entire search at the point of field extraction. Thanks! And apologies for any stumbles re lexicon/vocabulary. John
Hi, can someone help with a regex expression to mask the below kind of pattern? I need this pattern of text to be masked wherever I find it in my events.

    12/KQXA/123456/ABXY  --> **************ABXY
    11/VAXA/123456 /VAQY --> **************VAQY
    00/LCXA/545232/GYFT  --> **************GYFT
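The three samples share the shape two digits / four letters / six digits (optionally a stray space) / four letters, with everything before the final group masked. A Python sketch of a regex matching that assumed shape — adjust if the real data varies; in Splunk itself the same pattern would typically be applied at index time via a SEDCMD in props.conf or a masking transform:

```python
import re

# Assumed shape: 2 digits / 4 uppercase letters / 6 digits, optional space,
# then a slash; the lookahead keeps the trailing 4-letter group visible.
PATTERN = re.compile(r"\b\d{2}/[A-Z]{4}/\d{6}\s*/(?=[A-Z]{4}\b)")

def mask(text: str) -> str:
    # Replace the matched prefix with a fixed run of asterisks.
    return PATTERN.sub("*" * 14, text)

print(mask("12/KQXA/123456/ABXY"))   # **************ABXY
print(mask("11/VAXA/123456 /VAQY"))  # **************VAQY
print(mask("00/LCXA/545232/GYFT"))   # **************GYFT
```

The \s* absorbs the space seen in the second sample, and the lookahead means the replacement never touches the four letters that should remain readable.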
Is there a more secure way to make a client connection to Splunk other than a username and password (in a conf file)? Such as a system account using a certificate?
Hi...a newbie here. I've been absorbing training materials and looking through questions here, but find myself stuck on something I'm hoping is easily fixed. I have a timechart using a search of unsightly error messages, which I rename for readability on the chart. When I click on a chart bar, the linked search page opens; however, the search string uses the renamed values from my chart's query. For example, when I click on the bar labelled 'DB-connection error' for some date, the search that opens uses the renamed value ('DB-connection error') instead of the original string to be searched ('Error connecting to database'). Any help is much appreciated! Thanks, Michelle

Linked search:

    sourcetype="AA42127:OQL:bulk" DB-connection error earliest=1579820400 latest=1579906800

Graph panel source:

    <row>
      <panel>
        <title>FMDS Errors</title>
        <chart>
          <search>
            <query>sourcetype="AA42127:OQL:bulk" "(ADDRESS_LIST=(FAILOVER=on)" OR "Failed delivery for" OR "Error connecting to database" OR "Setup of JMS message listener invoker failed for destination" OR "moveFailed" OR "debulkStatusResult: Exception" OR "FileStatus=FAILED" OR "FMT-ERROR" OR "MsgDebulk filename in Error" OR "MsgDebulk generation failed"
    | timechart span=1d
        count(eval(match(_raw, "ADDRESS_LIST=*FAILOVER=on"))) as "ADDRESS_LIST_FAILOVER"
        count(eval(match(_raw, "Failed delivery for"))) as "Failed delivery"
        count(eval(match(_raw, "Error connecting to database"))) as "DB-connection error"
        count(eval(match(_raw, "Setup of JMS message listener invoker failed for destination"))) as "JMS message listener invoker failed"
        count(eval(match(_raw, "moveFailed"))) as "moveFailed"
        count(eval(match(_raw, "debulkStatusResult: Exception"))) as "debulkStatusResult: Exception"
        count(eval(match(_raw, "FileStatus=FAILED"))) as "FAILED FileStatus"
        count(eval(match(_raw, "FMT-ERROR"))) as "FMT-ERROR"
        count(eval(match(_raw, "MsgDebulk filename in Error"))) as "MsgDebulk Filename Error"
        count(eval(match(_raw, "MsgDebulk generation failed"))) as "MsgDebulk Generation Error"</query>
            <earliest>-30d@d</earliest>
            <latest>now</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
          <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
          <option name="charting.axisTitleX.visibility">visible</option>
          <option name="charting.axisTitleY.visibility">visible</option>
          <option name="charting.axisTitleY2.visibility">visible</option>
          <option name="charting.axisX.abbreviation">none</option>
          <option name="charting.axisX.scale">linear</option>
          <option name="charting.axisY.abbreviation">none</option>
          <option name="charting.axisY.scale">log</option>
          <option name="charting.axisY2.abbreviation">none</option>
          <option name="charting.axisY2.enabled">0</option>
          <option name="charting.axisY2.scale">inherit</option>
          <option name="charting.chart">column</option>
          <option name="charting.chart.bubbleMaximumSize">50</option>
          <option name="charting.chart.bubbleMinimumSize">10</option>
          <option name="charting.chart.bubbleSizeBy">area</option>
          <option name="charting.chart.nullValueMode">gaps</option>
          <option name="charting.chart.showDataLabels">all</option>
          <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
          <option name="charting.chart.stackMode">default</option>
          <option name="charting.chart.style">shiny</option>
          <option name="charting.drilldown">all</option>
          <option name="charting.layout.splitSeries">0</option>
          <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
          <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
          <option name="charting.legend.mode">standard</option>
          <option name="charting.legend.placement">top</option>
          <option name="charting.lineWidth">2</option>
          <option name="refresh.display">progressbar</option>
          <option name="trellis.enabled">0</option>
          <option name="trellis.scales.shared">1</option>
          <option name="trellis.size">medium</option>
          <drilldown>
            <link target="_blank">search?q=sourcetype="AA42127:OQL:bulk" $click.name2$ earliest=$earliest$ latest=$latest$&amp;earliest=&amp;latest=</link>
          </drilldown>
        </chart>
      </panel>
    </row>
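Since $click.name2$ carries the renamed series label, one approach is to translate the label back to the raw search string before building the link, using Simple XML's eval token inside the drilldown. A sketch with only two of the series mapped (extend the case() for the rest):

```xml
<drilldown>
  <eval token="raw_term">case("$click.name2$"=="DB-connection error", "Error connecting to database",
                              "$click.name2$"=="Failed delivery", "Failed delivery for",
                              true(), "$click.name2$")</eval>
  <link target="_blank">search?q=sourcetype="AA42127:OQL:bulk" "$raw_term$"&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>
```

Quoting $raw_term$ in the link also avoids the multi-word term being split into separate search terms, which would happen with the current unquoted $click.name2$.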
Hi guys, I have the following example. Searching for "s" in FIELD B, delimited by ",", my expected result is the following:

    FIELD A | FIELD B   | COUNT
    x       | s,a,b,c   | 1
    y       | s,x,x,xs  | 2
    z       | s,a,s,s,s | 4

Thanks for the help
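Reading the expected counts, COUNT appears to be the number of comma-delimited values in FIELD B that contain an "s" ("s" and "xs" both count in row y). Assuming that, a sketch using split and mvfilter:

```
... | eval parts=split('FIELD B', ",")
    | eval COUNT=coalesce(mvcount(mvfilter(match(parts, "s"))), 0)
    | table "FIELD A" "FIELD B" COUNT
```

The coalesce handles rows with no match, where mvcount would otherwise return null instead of 0.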
Hi, on our Linux Splunk servers, my system admin set this record in remotesyslog.conf:

    @@syslog-zone40.uth.tmc.edu:1514

Does anyone know what type of logs this setup sends to the Splunk HF? (OS logs and application logs, or anything else?) Thank you.
I have a pair of heavy forwarders that is load balanced by a round-robin DNS record. I want to set them up as HTTP Event Collectors as described in the documentation: https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/ScaleHTTPEventCollector

I have enabled the deployment server by setting useDeploymentServer=1. When I configure my token, it now writes to /opt/splunk/etc/deployment-apps. When the token is created on the deployment server it looks like this:

    [http://openshift]
    disabled = 0
    host = <myDeploymentServerName>
    index = kubernetes_test
    sourcetype = kubernetes
    token = <mytoken>

If I push this out, my host= will not match either of the two HFs the config is going to. Do I need to push out a separate config for each HF? Can I manually update the host name? Can I put multiple hosts on that line?

My second question: I had to manually change the name of the index because the HFs aren't part of the index cluster. Will that impact anything?
Hi, I currently have a standalone machine agent with Server Visibility, which gives me all the metrics for my host. I do see my local volumes under the volume section when I select my host under the Servers tab. Is there any way I can also see the volume metrics for volumes connected or shared to the host through the network? Thanks!
Hi @all, I'm a little bit helpless as a beginner with Splunk. I tried to do simple queries like:

Request status code as a timechart:

    index="name" | timechart count(http_status=200)

Count page views of a specified URL:

    index="name" | timechart count (cs_uri_stem)

Neither command works. Can you please help me find and execute the right commands? Thank you. BR, Michael
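In SPL, count() takes a field name, not a condition; a condition has to go through eval. Sketches of both intents, with the URL value as a placeholder:

```
index="name" | timechart count(eval(http_status=200)) AS status_200

index="name" cs_uri_stem="/your/page.html" | timechart count AS pageviews
```

The first counts only events whose http_status is 200 per time bucket; the second filters to one URL in the base search and counts the matching events.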
Hello, does anyone know about the configuration of this Splunk app: https://splunkbase.splunk.com/app/4768/ ?
Are the Splunk Cisco ISE app and the Splunk Cisco ISE Add-on already mapped to the Splunk CIM by default? If not, is there any documentation we can use to map them and become CIM compliant?