All Topics

Hi all, I have the Splunk Security Essentials app installed and configured. I am trying to understand how the app determines whether a rule has data, because there are rules that do have logs but whose status is still "needs data". There is the command sseanalytics, but I am not sure how it works. Thanks!
Hi all, I would like to ask if it is possible to monitor MSSQL transaction logs (DROP, CREATE statements) without using any apps?
As the subject says, I would like to know how to obtain multiple tokens from a dropdown. What I want to achieve: given a table like the one below, when Name "テスト1" is selected in the dropdown, I want to use both its Id and its Value as tokens.

Id Name Value
A001 テスト1 1000
A002 テスト2 2000

Id is used to narrow down what is displayed, and Value should become the maximum of the chart's Y-axis. I have included the relevant part of the code as a sample below. Filtering by Id already works, so could you advise on:

- how to set Value as a token
- whether the Y-axis maximum can be set as written (the <option name="charting.axisY.maximumNumber">$value$</option> part)

Thank you.

<form theme="light">
  <label>hogehoge</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="dropdown" token="id">
      <label>hogehoge</label>
      <fieldForLabel>Name</fieldForLabel>
      <fieldForValue>Id</fieldForValue>
      <search>
        <query>index="list" | table Id, Name, Value | sort Id </query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>LineChart</title>
      <chart>
        <search>
          <query> ~ omitted ~ </query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisY.maximumNumber">$value$</option>
        ~ omitted ~
      </chart>
    </panel>
  </row>
</form>
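One common workaround (a sketch, untested against this dashboard; the combo field and the maxY token are names invented for illustration): pack Id and Value into a single delimited dropdown value, then split it into two tokens in a <change> block. This assumes Id and Value never contain the | delimiter.

```xml
<input type="dropdown" token="id">
  <label>hogehoge</label>
  <fieldForLabel>Name</fieldForLabel>
  <fieldForValue>combo</fieldForValue>
  <search>
    <query>index="list" | eval combo=Id."|".Value | table combo, Name | sort combo</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <change>
    <!-- $value|s$ is the selected combo, e.g. "A001|1000" -->
    <eval token="id">mvindex(split($value|s$, "|"), 0)</eval>
    <eval token="maxY">mvindex(split($value|s$, "|"), 1)</eval>
  </change>
</input>
```

The chart option would then reference the second token: <option name="charting.axisY.maximumNumber">$maxY$</option>.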
Hello, is it possible to add customized tokens or values in the Splunk Alert Manager app?
Hi, can I separate a Trellis visualization by two variables as keys? In other words, I would like a timechart for each combination of the two variables (variable2_1, for example, is an instance in the variable2 column). My goal is to have: [screenshot omitted] For now I succeeded doing:

| stats max(variable1) by variable2, variable3

and the output (one of the Trellis panels, for example): [screenshot omitted] But I wanted a timechart and a separate histogram for each combination of variable2 and variable3. I also tried:

| timechart max(variable1) by variable2, variable3

which doesn't work. Could you kindly assist? The aggregation section in Trellis also doesn't seem to produce the wanted results. Thanks.
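Since timechart accepts only a single split-by field, a common workaround (a sketch using the field names from the post) is to combine the two variables into one series field and split by that; Trellis can then lay out one panel per combination:

```spl
... | eval series=variable2."-".variable3
| timechart max(variable1) by series
```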
Hi All, I am building a solution to monitor the Windows event logs from about 800 machines using a Splunk deployment server setup. I am filtering for only 4 event codes using the whitelist option (4624, 4634, 4800, 4801). The logs seem to be flowing correctly and I am able to generate reports. However, the issue I am facing is that my disk space is filling up quickly: about 50 GB for a week of data. I can increase the disk space by 200 GB, but I fear it will be filled in another 2 weeks. Can someone advise how disk space can be optimized when monitoring the Windows event logs of 800 machines? Thanks, Naagaraj SV
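Independent of reducing ingestion, disk usage per index can be capped with retention settings in indexes.conf on the indexer (a sketch; the index name and both limits are placeholders to adapt, and note that frozen data is deleted by default):

```ini
[wineventlog]
# cap the total size of this index; the oldest buckets are frozen first
maxTotalDataSizeMB = 150000
# and/or freeze events older than 30 days (value in seconds)
frozenTimePeriodInSecs = 2592000
```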
Hi Guys, I need to migrate historical data from QRadar to the Splunk platform. Do you have any suggestions? Syslog? DB Connect? Many thanks in advance, Alessandro
I have the following output from a search:

fld1 fld2 fld3 fld4
A               B
I                 J
                  B      C
D                C
E               F
                  F       G
                  J        K
H                G
L                 K

I want the following output:

fld1 fld2 fld3 fld4
A               B
                  B      C
D                C
E               F
                  F       G
H                G
I                J
                  J        K
L                 K

There are always triples of rows to place one after the other, joined by equality of fld3 or fld4, but the rows of a triple do not always follow each other. The order of the rows within a triple is always as given in the example. How can I get that?
Resolved
Hi, I am experiencing an issue with AppDynamics Lite: I have not been able to access the controller for the past week (https://[Redacted].saas.appdynamics.com). I keep getting a "This site can't be reached" response. What is happening? Is there any way to make it accessible again? For further information: my SaaS trial was activated back in early January this year, and I could access it fine. When the 15-day Pro trial ended, it successfully transitioned into AppDynamics Lite, which to my understanding, based on the information at https://docs.appdynamics.com/display/PRO45X/Lite+and+Pro+Editions and https://www.appdynamics.com/pricing/lite, has no time limit. Any feedback would be greatly appreciated, thanks! ^ Edited by @Ryan.Paredez to remove the Controller URL. Please do not share your Controller URL on Community posts for security and privacy reasons.
Hello Splunk Community, I have an issue with JSON parsing in Splunk and hope you can help me with that.

Situation: Logs arrive via syslog on our indexers. Inside my app I have the following inputs.conf:

[monitor:///here_is_the_correct_path]
disabled = false
host_segment = 3
index = buttercup
sourcetype = buttercup:server
host =

This is my props.conf:

[buttercup:server]
TZ = UTC
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = buttercup Server Logs
pulldown_type = true
TRANSFORMS-afilter = setnull, setparsing-audit, setparsing-auth
TRANSFORMS-changesourcetype = change-buttercup-server-audit, change-buttercup-server-auth

[buttercup:server:audit]
TZ = UTC
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = buttercup Server Auditlog
pulldown_type = true
SEDCMD-strip_prefix = s/^[^{]+//g

[buttercup:server:auth]
TZ = UTC
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = buttercup Server Authenticationlog
pulldown_type = true

And my transforms.conf:

[change-buttercup-server-audit]
REGEX = buttercup_audit\:
FORMAT = sourcetype::buttercup:server:audit
DEST_KEY = MetaData:Sourcetype

[change-buttercup-server-auth]
REGEX = buttercup_auth\:
FORMAT = sourcetype::buttercup:server:auth
DEST_KEY = MetaData:Sourcetype

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing-audit]
REGEX = buttercup_audit\:
DEST_KEY = queue
FORMAT = indexQueue

[setparsing-auth]
REGEX = buttercup_auth\:
DEST_KEY = queue
FORMAT = indexQueue

Description: After the input to index buttercup and sourcetype buttercup:server, I use TRANSFORMS-afilter first to filter everything from the syslog stream that does not include audit or auth logs.
Therefore, I am using the setnull/setparsing construct in transforms.conf. After the filtering process, data goes back into the indexQueue and I use TRANSFORMS-changesourcetype to assign the matching buttercup:server:audit or buttercup:server:auth sourcetype. The filtering and sourcetype assignment are successful, which shows me that the construct is working fine.

Problem: The audit log has a JSON structure which should be parsed by Splunk automatically. To achieve this, I use the SEDCMD for this sourcetype to remove the prefix in front of the JSON structure. This JSON parsing works fine when I do a manual file input and select buttercup:server:audit directly, but it does not work for a manual input when I select buttercup:server. Therefore, it is also not working for the monitor input of the syslog stream (as buttercup:server is used first).

I think the problem has something to do with the SEDCMD and when it is handled. Do you have any idea how to fix that? I was thinking about doing the SEDCMD part within an additional transform instead, but I don't know how. I have also experimented with adding INDEXED_EXTRACTIONS=JSON and KV_MODE=none (and vice versa) to the sourcetype, but with no success.
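One possible fix (a sketch; the stanza name strip-json-prefix is invented, and it assumes the prefix never contains a `{`): do the prefix stripping as an index-time transform on the original buttercup:server sourcetype instead of a SEDCMD on the rewritten one, since, as I understand it, index-time settings of a sourcetype assigned via DEST_KEY = MetaData:Sourcetype are not applied again after the rewrite:

```ini
# transforms.conf
[strip-json-prefix]
REGEX = ^[^{]+(\{.*)$
FORMAT = $1
DEST_KEY = _raw

# props.conf, appended to the existing [buttercup:server] stanza
TRANSFORMS-zzz-strip = strip-json-prefix
```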
Hi guys, I am seeing this error on one of my heavy forwarders (HWF); any clues to fix the issue?

09-05-2021 14:11:21.437 +1000 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
09-05-2021 14:10:40.826 +1000 INFO TailReader - Starting batchreader0 thread
09-05-2021 14:10:40.826 +1000 INFO TailReader - tailreader0 waiting to be un-paused
09-05-2021 14:10:40.826 +1000 INFO TailReader - Starting tailreader0 thread
09-05-2021 14:10:40.826 +1000 INFO TailReader - Registering metrics callback for: batchreader0
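This warning usually means a queue further down the pipeline is blocked. To see which queue is actually filling up on the HWF (a sketch; adjust the host filter to the forwarder's hostname):

```spl
index=_internal host=<your_hwf> source=*metrics.log* group=queue
| timechart span=5m avg(current_size_kb) by name
```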
I have a search which uses email data to search specific email logs for communications from/to a specific organization and lists all necessary attributes. This search is required to run at the end of every month and generate a report. The scheduled search takes roughly 25 hours to run.

index=cisco_esa sourcetype=cisco:esa:textmail mail_logs
| transaction internal_message_id maxspan=300s
| search recipient="*@abc.com" OR sender="*@abc.com"
| table _time internal_message_id sender recipient field2 field3
| outputlookup abc_esa_summary.csv

Can somebody please suggest some improvements to this search to make it run faster?
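One commonly suggested optimization (a sketch; it assumes the listed fields all belong to the same internal_message_id and that values() aggregation is acceptable for the report) is to replace the expensive transaction command with stats:

```spl
index=cisco_esa sourcetype=cisco:esa:textmail mail_logs
| stats earliest(_time) as _time values(sender) as sender values(recipient) as recipient
        values(field2) as field2 values(field3) as field3 by internal_message_id
| search recipient="*@abc.com" OR sender="*@abc.com"
| table _time internal_message_id sender recipient field2 field3
| outputlookup abc_esa_summary.csv
```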
The date field sometimes has 2 spaces and sometimes 1 space, depending on whether the day is a single digit or double digit, e.g. May[space][space]9 vs. May[space]10. As a result, the field extraction regex finds the wrong field in the first 10 days of the month. Sample regex that Splunk comes up with:

^(?:[^ \n]* ){9}(?P<ResponseTime>\d+)

I would have expected this to be a common enough problem, but I can't seem to google the answer. TIA for your assistance, from this regex newbie.
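A sketch of a fix: in `(?:[^ \n]* ){9}`, the `[^ \n]*` can match an empty string, so the second of two consecutive spaces is counted as an extra "field" and shifts the capture. Requiring a non-empty token followed by one or more whitespace characters makes a run of spaces act as a single separator (assuming ResponseTime really is the 10th whitespace-separated field):

```spl
| rex "^(?:\S+\s+){9}(?P<ResponseTime>\d+)"
```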
How can Splunk use the userid returned by the IdP to validate roles based on groups mapped in LDAP (Microsoft Active Directory) and successfully integrate the SSO? We're using IBM SAM as the SSO. As of now, IBM SAM cannot provide the following attributes in an assertion, and only the userid is returned: role, realName, and mail.
Installed the AppDynamics Enterprise Console and Controller on 192.168.xx.xx, created user miles and API client poc. POST from Postman:

curl -X POST -H "Content-Type: application/vnd.appd.cntrl+protobuf;v=1" "http://192.168.xx.xx:8090/controller/api/oauth/access_token" -d 'grant_type=client_credentials&client_id=poc@miles&client_secret=1d7d9f28-b9a8-xxxx-xxxx-68e76fc58db7'

returns 401.
Hi. I am trying to edit the source code of a Splunk panel such that the token should only be set when the user clicks on a particular column (not a particular value). For example, consider the table below.

Column1 Column2 Column3 Column4 Column5 Column6 Column7
value1 value2 value3 value4 value5 value6 value7
value11 value22 value33 value44 value55 value66 value77

I want to make changes such that the token works only when the user clicks any value in Column1 or Column2 (it can be value1, value11, value2, or value22). The token should not be set when the user clicks on any value from the other columns. The new panel opened based on the token should disappear when the user clicks on any value from columns other than Column1 and Column2.

The source code which I have:

<row>
  <panel>
    <table>
      <title>Stats by user(Click on a value for further info)</title>
      <search>
        <query>SOME BUSINESS SPLUNK QUERY</query>
        <earliest>$startTime.earliest$</earliest>
        <latest>$startTime.latest$</latest>
        <refresh>5m</refresh>
        <refreshType>delay</refreshType>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">cell</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <drilldown>
        <condition>
          <set token="value">$click.value2$</set>
        </condition>
      </drilldown>
    </table>
  </panel>
</row>
<row depends="$value$">
  <panel>
    <title>Usage of $value$</title>
    <chart>
      <search>
        <query>ANOTHER SPLUNK QUERY</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="charting.chart">line</option>
      <option name="charting.drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <unset token="value"></unset>
      </drilldown>
    </chart>
  </panel>
</row>

Please help me add a condition to unset the token based on the column. Any help is greatly appreciated. Thanks in advance.
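A sketch of the usual approach: the <condition> element of a drilldown accepts a field attribute, so the token can be set only for clicks in Column1/Column2 and unset for clicks anywhere else (field="*" matches the remaining columns; $click.value2$ as in the original):

```xml
<drilldown>
  <condition field="Column1">
    <set token="value">$click.value2$</set>
  </condition>
  <condition field="Column2">
    <set token="value">$click.value2$</set>
  </condition>
  <condition field="*">
    <unset token="value"></unset>
  </condition>
</drilldown>
```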
I have a directory with about 750 log files. The files are all text files, and the total size of this directory is 117 GB. I need to index the files once (not continuously). Would the best option to index all the files in the directory be to copy the files to my Splunk instance and then use the directory input option from Splunk Web, i.e. Settings > Data Inputs > Files & Directories? Any other recommended options?
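For a strictly one-time load, the CLI oneshot input is an alternative to a monitor stanza (a sketch; the index and sourcetype are placeholders, and the command is run once per file from $SPLUNK_HOME/bin):

```
./splunk add oneshot /path/to/logs/file001.log -index main -sourcetype mylogs
```

Monitoring the directory via Settings > Data Inputs also works; just remove the input afterwards so Splunk does not keep watching the files.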
I have a field that consists of data separated from a JSON data field using this search:

index="test-99" sourcetype="csv" | eval AuditData_keys = json_keys(AuditData)

This works perfectly and creates the field called AuditData_keys. The data in the field AuditData_keys is unique based on the values in a field called operations. There are 39 unique values, each with its own unique set of fields. I'm trying to extract each value of the operations field into distinct fields per value. My initial idea was to have individual eventtypes for each operations value. The issue I'm having is: what is the best way to extract the fields, as the operations contain similar fields as well as additional fields per operation value? I came up with this search to list each value of the operations field and its relevant data fields:

index="test-99" sourcetype="csv" | eval AuditData_keys = json_keys(AuditData) | table Operations AuditData_keys | dedup AuditData_keys | outputcsv AuditData_extracted_fields_unique.csv

Here is a sample of one operation value and its fields:

Operation value: UserLoginFailed
Values (fields) from AuditData_keys: ["CreationTime","Id","Operation","OrganizationId","RecordType","ResultStatus","UserKey","UserType","Version","Workload","ClientIP","ObjectId","UserId","AzureActiveDirectoryEventType","ExtendedProperties","ModifiedProperties","Actor","ActorContextId","ActorIpAddress","InterSystemsId","IntraSystemId","SupportTicketId","Target","TargetContextId","ApplicationId","DeviceProperties","ErrorNumber","LogonError"]

Short of manually typing the fields for each operation value and using string commands, there has to be a more efficient way. This is O365 audit data extracted with PowerShell as a CSV file that has embedded JSON data. Thanks in advance, Robert
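Rather than maintaining a field list per operation, spath can auto-extract every field from the embedded JSON at search time (a sketch using the field names from the post; fields that do not exist for a given operation simply stay empty):

```spl
index="test-99" sourcetype="csv"
| spath input=AuditData
| stats count by Operation
```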
I am relatively new to this wonderful tool called Splunk. Please excuse me if this question has already been answered. I have event logs from an SFTP server. Below is the table from the logs:

Current_Status Count
Delivered 56415
Pending 10000
Failed 200
Error 300

My requirement is below:

Current_Status Count
Delivered 56415
Pending 10000
Others 500

Please help. Thank you in advance.
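One way to fold Failed and Error into Others (a sketch; it assumes the table above comes from a field Current_Status with a Count per status):

```spl
... | eval Current_Status=if(Current_Status="Failed" OR Current_Status="Error", "Others", Current_Status)
| stats sum(Count) as Count by Current_Status
```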