All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunkers, how can we see Jira tickets in Splunk? For ServiceNow we have an add-on that integrates with Splunk and lets us view complete details of incidents, problems, etc. I tried working with the Jira Issue Collector Splunk application, but it does not seem to be working. The strange part is that I am unable to find any logs for it.
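In the absence of a working add-on, one common stopgap is a scripted input that polls Jira's REST API and prints each issue as JSON for Splunk to index. The sketch below is illustrative only: the host `jira.example.com` is hypothetical, authentication is omitted, and only Jira's documented REST v2 issue-search endpoint is assumed.

```python
import json
import urllib.parse
import urllib.request

JIRA_BASE = "https://jira.example.com"  # hypothetical Jira host


def build_search_url(jql, max_results=50):
    """Build the Jira REST API v2 issue-search URL for a JQL query."""
    query = urllib.parse.urlencode({"jql": jql, "maxResults": max_results})
    return f"{JIRA_BASE}/rest/api/2/search?{query}"


def fetch_issues(jql, opener=urllib.request.urlopen):
    """Fetch matching issues; each dict can be printed as JSON for Splunk."""
    with opener(build_search_url(jql)) as resp:
        return json.load(resp).get("issues", [])
```

A scheduled scripted input would then call something like `fetch_issues("updated >= -1d")` and emit `json.dumps(issue)` per result, so Splunk indexes one event per ticket.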
I have a field (version) that appears at different positions in different events of the same sourcetype, because the preceding field (description) has an irregular number of characters. As a result, I am seeing null values wherever the field (version) sits at a different position. I want to extract the field (version) wherever it appears in the events of the sourcetype. I tried the following: | rex field=_raw "Version=(?<Version>\"\w+\s+\w+\".*?)," | rex mode=sed field=Version "s/\\\"//g" but it didn't work. Please suggest a way to extract this.
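Because `rex` scans the whole event, the field's position should not matter if the pattern anchors on the field name rather than on what precedes it; the likely problem is that `\w+\s+\w+` assumes exactly two words inside the quotes. Something like `| rex field=_raw "Version=\"?(?<Version>[^\",]+)"` is one possibility. The Python sketch below checks the same regex semantics (the sample events in the test are made up):

```python
import re

# Anchor on the field name itself; accept any run of characters that is
# neither a quote nor a comma, with the surrounding quotes optional.
VERSION_RE = re.compile(r'Version="?(?P<Version>[^",]+)"?')


def extract_version(raw_event):
    """Return the Version value wherever it appears in the event, else None."""
    m = VERSION_RE.search(raw_event)
    return m.group("Version") if m else None
```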
Hi, I need to show id1 and id2 on a timechart. I have a table with these columns: index="myindex" | table duration servername id1 id2

duration   Time                  servername   id1   id2
2.643000   2021-22-11 18:30:45   Server1      111   32
2.009000   2021-22-11 18:30:45   Server2      321   72

I need to create a timechart that shows duration by servername, plus the additional column data id1 and id2. Any ideas? Thanks
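`timechart` accepts only a single split-by field, so one workaround is to fold the extra columns into the series label. A hedged sketch (the span and the `avg` aggregation are guesses at the intent):

```
index="myindex"
| eval series = servername . " (id1=" . id1 . ", id2=" . id2 . ")"
| timechart span=1m avg(duration) BY series
```

Note that if id1/id2 vary per event this multiplies the number of series; in that case keeping them in a separate stats table may read better.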
After upgrading Splunk to v8.1.5 and updating the AWS app & add-on, we are running into a version conflict with the Python for Scientific Computing (PSC) app: MLTK and the AWS app need different PSC versions on the same SH. MLTK 5 requires PSC 3.0.0 and the AWS app requires PSC 1.2, as documented in https://docs.splunk.com/Documentation/AWS/6.0.3/Installation/Hardwareandsoftwarerequirements

[screenshots: MLTK, installed versions of PSC, installed AWS apps]

As described in the documentation URL above, I renamed the PSC folder and created the app.conf:

/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64_awsapp/local/app.conf
[package]
id = Splunk_SA_Scientific_Python_linux_x86_64_awsapp

Now it works for AWS, but MLTK cannot find the installed PSC 3. If I change the directory to ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp, MLTK works, but AWS can no longer find the PSC:

/opt/splunk/etc/apps/ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp/local/app.conf
[package]
id = ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp

Is there a way to get this to work, or is this an unsupported constellation?
Hi everyone, I spun up a new machine (VM) in Azure and am trying to install Phantom (SOAR) using the RPM file. I partitioned and mounted 700 GB to /opt and 5 GB to /tmp. While installing the package, I am getting the error below and am unable to find an answer. Can someone help with this?

Failed to run install for git
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Ra)
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Delta)
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Core)

Thanks, Santhosh Govindhan
Can I configure BREAK_ONLY_BEFORE with this regex: ##################################################################|(pg-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(ss7-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(ss7-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(da-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(da-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-3 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(om-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(pg-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(om-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(mms-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(mms-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>) and set SHOULD_LINEMERGE to true? My problem is that when I configure this, Splunk automatically adds the regex I specified in BREAK_ONLY_BEFORE as the LINE_BREAKER, so the result is not what I want. I want to keep the text matched by the regex in the event; I do not want LINE_BREAKER because it removes the matched text. Does anyone know what I should do?
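With LINE_BREAKER, only the text matched by the first capture group is discarded at the event boundary, so wrapping just the newline in the group and putting the header pattern in a lookahead keeps the header in the event. A hedged props.conf sketch (the stanza name is a placeholder, and the alternation is condensed; verify it against your data):

```
[your_sourcetype]
SHOULD_LINEMERGE = false
# Only the capture group's match is removed; the lookahead keeps the
# header text at the start of the next event.
LINE_BREAKER = ([\r\n]+)(?=#{20,}|(?:pg|ss7|da|fs|om|mms)-\d \| [a-zA-Z0-9._%-]* \| rc=0 >>)
```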
Hi all, hoping someone out there can help me unravel the mystery I'm currently facing. We have a KV store that we use to hold MISP values, which is checked against when running various security alerts. We have three searches that query the MISP data source and, based on the results, should add any new entries to the KV store. The basics of the search we run are below:

| misp command to get new records in last 24hrs
| bunch of evals to format data
| append [| inputlookup MispKVstore]
| dedup
| outputlookup append=false MispKVstore

We run this three times to get details for different types of values, but all are stored in the same KV store. The issue we are having is that once we reach 50 rows in the KV store, updates are not made as expected. Each time a search runs, it adds new entries for its own category but seems to delete/discard the values added by the other searches. All column names are consistent between the searches. I have updated max_rows_per_query because we thought we might be hitting the 50k limit, but this has not resolved the issue. Seeking any tips, tricks, or troubleshooting advice anyone can give to help get this sorted. Thanks in advance.
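Since `outputlookup append=false` replaces the entire collection with whatever the current search returns, any existing row that does not survive the `append`/`dedup` stage is silently dropped. One less destructive pattern is to dedup on explicit key fields after pulling the existing rows back in (the field names `value` and `type` below are placeholders for your actual columns):

```
| misp command to get new records in last 24hrs
| bunch of evals to format data
| inputlookup append=true MispKVstore
| dedup value type
| outputlookup append=false MispKVstore
```

Alternatively, each search can use `outputlookup append=true` so it only ever adds its own rows, with duplicates handled via the KV store's key field.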
I work in a large, mostly clustered environment (Splunk Enterprise and ES; SHs & indexers clustered). Maintenance is being done, and we are told that an indexer will be moved to a new host and data loss will occur. How do I take this indexer out of the cluster briefly to avoid data loss? Thanks very much for your help in advance.
Maintenance is being performed, and we are told that an indexer (part of a cluster) is going to be moved to a new host and data loss may occur. How can we take this indexer out of the cluster so we won't face data loss? Thank you very much in advance.
I would like to have an alert sent when my syslog server stops sending logs to the Splunk application. Because I am very new to Splunk, can I get some examples please?
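One common pattern is a scheduled search that alerts when the number of results is greater than zero. A sketch, assuming the syslog data lands in a single index (`syslog_index` and the 5-minute threshold are placeholders):

```
| metadata type=hosts index=syslog_index
| eval minutesSince = round((now() - recentTime) / 60)
| where minutesSince >= 5
| convert ctime(recentTime) AS lastSeen
| table host lastSeen minutesSince
```

Note `metadata` only lists hosts seen within the search time range, so run it over something wide (e.g. last 24 hours) and schedule it every few minutes with an alert condition of "number of results > 0".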
Hi folks, I am facing an issue where I am not able to see the red bar in the panel below. The count is per hour, and the error count is mostly 1 or 2 events per hour, so it is dwarfed by the larger series. How can I make the red bar visible? Any help or suggestions please?
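Two usual workarounds when one series is orders of magnitude smaller than another: switch the Y axis to a log scale, or move the small series to a second axis as a chart overlay. Simple XML options for both (the field name `error_count` is illustrative; use your actual error-series field):

```xml
<!-- Option A: log scale makes 1-2 counts visible next to large bars -->
<option name="charting.axisY.scale">log</option>

<!-- Option B: overlay the small series on a second axis -->
<option name="charting.chart.overlayFields">error_count</option>
<option name="charting.axisY2.enabled">true</option>
<option name="charting.axisY2.fields">error_count</option>
```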
This app worked for about a day, then started giving us this error:

11-18-2021 06:04:27.982 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" raise ConnectionError(err, request=request)
11-18-2021 06:04:27.982 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
11-18-2021 06:04:28.019 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" ERROR('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

Any ideas on what would cause this error?
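`ConnectionResetError(104, 'Connection reset by peer')` means the remote side (or a proxy/firewall in between) dropped the TCP connection mid-request, which is usually transient or environmental rather than a bug in the add-on itself, so network path and TLS inspection are worth checking first. For custom scripts the generic mitigation is retrying with backoff; this sketch is illustrative only and is not the add-on's actual code:

```python
import time


def with_retries(fn, attempts=3, base_delay=1.0,
                 retry_on=(ConnectionError,), sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            sleep(base_delay * (2 ** attempt))
```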
I have ids xyz, ABC, EDC, and FIS. I am using the query below to get the application name and the counts of failures and successes:

index=* event=*
| eval id = case(id="xyz", "one", id="ABC", "Two")
| eval index = case(index="work_prod", "PROD", index="work_qa", "QA")
| table id, index, status
| stats count(eval(status="success")) AS Success, count(eval(status="failure")) AS Failure BY id, index
| rename index AS Env, id AS Application_name

Result I am seeing:

Application_name   Env    Success   Failure
one                Prod   100       2
Two                QA     20        10

I have more than two ids, but since I eval only two of them, the output contains only those two. How can I get the rest?

Expected result:

Application_name   Env    Success   Failure
one                Prod   100       2
Two                QA     20        10
EDC                QA     20        10
FIS                PROD   20        10
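`case()` returns null when no condition matches, and `stats ... BY id` drops events where `id` is null, which is why the unmapped ids disappear. Adding a `true()` catch-all that passes the original value through is one fix:

```
index=* event=*
| eval id = case(id="xyz", "one", id="ABC", "Two", true(), id)
| eval index = case(index="work_prod", "PROD", index="work_qa", "QA", true(), index)
| stats count(eval(status="success")) AS Success, count(eval(status="failure")) AS Failure BY id, index
| rename index AS Env, id AS Application_name
```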
Hi folks, I tried to configure the AWS add-on on my subscription, but I receive this error for CloudTrail logs:

message="Failed to download file"

Splunk version = 8.2.0
Input type = SQS-based S3
AWS add-on version = 5.0

Any suggestions? Any checks on the policy side in the AWS console?
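"Failed to download file" on an SQS-based S3 input often means the IAM principal can read the queue but not the objects in the bucket. A minimal policy sketch to compare against (queue and bucket names are placeholders; an SSE-KMS-encrypted bucket would additionally need `kms:Decrypt`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:*:*:your-cloudtrail-queue"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-cloudtrail-bucket",
        "arn:aws:s3:::your-cloudtrail-bucket/*"
      ]
    }
  ]
}
```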
Hi there, I am new to Splunk and was wondering how to find the difference between the current time and the last time a forwarder sent a log, and, if the host has not sent a log in 5 minutes, set its status to offline. This is what I am trying to achieve:

[screenshot: expected outcome of the time comparison]

Thank you.
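`tstats latest(_time)` per host gives the last event time cheaply; the rest is arithmetic against `now()`. A sketch (the index name and the 300-second threshold are assumptions):

```
| tstats latest(_time) AS lastSeen WHERE index=your_index BY host
| eval secondsSince = now() - lastSeen
| eval status = if(secondsSince > 300, "offline", "online")
| fieldformat lastSeen = strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
| table host lastSeen secondsSince status
```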
I've got a situation that I thought I understood but clearly don't. I have logs that look like this:

2021-11-22 14:00:00 Event=InventoryComplete ComputerName=Server1 ComputerName=Server2 ComputerName=ServerN

I thought that ComputerName would automatically be a multivalue field because there are multiple copies of that Key=Value pair, and that I'd be able to search any of the values. I also thought there were instances where this works automatically, but it's not working right now.

| search sourcetype=inventory_audit ComputerName=Server1 ```works```
| search sourcetype=inventory_audit ComputerName=Server2 ```no results```
| search sourcetype=inventory_audit "ComputerName=Server2" ```forcing a text search works```

Is there something I can do to make these events implicitly multivalue? Ideally for the entire sourcetype regardless of the specific field name, since this sourcetype covers a wide variety of audit logs with different object classes.
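Automatic KV extraction keeps only the first occurrence of a repeated key by default. A search-time REPORT extraction with `MV_ADD = true` makes repeated keys multivalue for the whole sourcetype; a sketch (stanza and transform names are illustrative, and the REGEX assumes simple space-delimited key=value pairs):

```
# props.conf
[inventory_audit]
REPORT-kv_mv = inventory_kv_mv

# transforms.conf
[inventory_kv_mv]
REGEX = (\w+)=(\S+)
FORMAT = $1::$2
MV_ADD = true
```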
Hello. I have two indexes and three users. Each user is in a specific AD group, each group is mapped to a respective role, and each role gives access to specific index(es). user_a can only search index_a, user_b only index_b, and user_c can search both. Restricting access to indexes is important.

Indexes
index_a
index_b

AD groups
user_idx_a (user_a is a member)
user_idx_b (user_b is a member)
user_idx_all (user_c is a member)

Users and roles
user_a has role role_a
user_b has role role_b
user_c has role role_c

authorize.conf
[role_role_a]
importRoles = user
srchIndexesDefault = index_a
srchIndexesDisallowed = index_b

[role_role_b]
importRoles = user
srchIndexesDefault = index_b
srchIndexesDisallowed = index_a

[role_role_c]
importRoles = user

That's all fine. But as more indexes are added, I wonder how this will scale, especially where, for example, user_d needs access to a newly created index_d plus, say, index_a. I would then need a new AD group (user_idx_d), a new role (role_role_a_d), and a suitable entry in authorize.conf. I've gained some mileage from putting index restrictions on the inherited (user) role, for example:

[role_user]
srchIndexesDisallowed = main;splunklogger;summary

I had thought I would put users in multiple AD groups, but whilst each membership brings a new role/index, it also means I end up with conflicting 'disallowed' directives. Is there a better way? Or have I reduced the administration to the minimum whilst maintaining index-access granularity? Many thanks.
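One pattern that scales better is to make each role purely additive with `srchIndexesAllowed`: allowed lists union across a user's roles, so they compose without the conflicts that `srchIndexesDisallowed` creates. With one AD group and one small role per index, user_d simply becomes a member of the groups mapped to role_idx_a and role_idx_d. A sketch (role names are illustrative):

```
[role_idx_a]
importRoles = user
srchIndexesAllowed = index_a

[role_idx_d]
importRoles = user
srchIndexesAllowed = index_d
```

Note that `srchIndexesDisallowed` takes precedence over allowed lists, so keep any disallow list on the inherited user role limited to indexes no one should ever search.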
I want to add users to Splunk via a DL (distribution list), and they need to be assigned roles.
Good morning. I support an application development effort that is transitioning from Elasticsearch to Splunk. I would like to set up a POC test instance/cluster of Splunk on our dev network. With Elastic, I would simply download an RPM and get started. With Splunk, it is unclear to me from reading the docs how to get started, regarding licensing and which files I can download. Apologies for the basic questions, but where can I get started? Which file can I download to install an instance and, hopefully, create a small (3/4-node) cluster for a POC? Thanks, Larry
Hi all, I have the following problem: I have an index that rolls data out every 30 days (i.e., data older than 30 days is removed). There is a subset of data from this index that I would like to query over a longer period, say 12 or 24 months. I'm fairly new to the idea of summary indexes, but it sounds like the logical solution. However, I'm concerned about losing previous data (that has been removed from the original index) each time the summary-index search is scheduled to run. Is there a way for a summary index to keep the data from old runs, so I can build a dataset that spans multiple months of the original index? Thanks in advance!
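A summary index is an ordinary index with its own retention, and each scheduled run only appends to it, so summaries written months ago survive long after the raw events age out of the source index. A sketch of an hourly fill search (the filter, `category` field, and index names are placeholders):

```
index=myindex your_filter earliest=-1h@h latest=@h
| stats count AS event_count BY category
| collect index=summary_myindex
```

Give `summary_myindex` its own long retention in indexes.conf, e.g. `frozenTimePeriodInSecs = 63072000` for roughly 24 months.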