All Topics


Hi, I want to copy the ag-grid JS functions from the CDN and access them in my dashboard XML. I copied all the functions into a new JS file and tried adding the file in the XML through <script src=...>, but the JS file is not being read. I checked for individual functions from the browser console, but none are found. Is copying the code from the CDN into a JS file and loading it through the XML the right approach? If not, please suggest a better one.
I have a lookup, | inputlookup citizen_data, with fields ID, Name, State. I have another sourcetype, | index=bayseian sourcetype=herc, with fields citizen_ID, mobile, email. My goal is to enrich the citizen_data lookup with additional columns, so that | inputlookup citizen_data shows ID, Name, State, Mobile, Email. NOTE: the ID field in the lookup is the same as the citizen_ID field in the sourcetype, and I want appendcols to join the rows by the matching ID values. But when I run my query with appendcols, it does append the new columns/fields to the lookup, yet it doesn't link or match them on the common field, i.e. the ID. It just appends the new columns in whatever order they arrive, so the rows contain incorrect data. Any suggestion on how to append properly by the common field (ID/citizen_ID)?
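A sketch of one common approach, assuming the field names above: appendcols pastes result sets side by side with no key matching, so a key-based join such as the lookup command is usually used instead, then the enriched rows are written back:

```
index=bayseian sourcetype=herc
| rename citizen_ID AS ID
| lookup citizen_data ID OUTPUT Name State
| table ID Name State mobile email
| outputlookup citizen_data
```

Here lookup matches each event's ID against the lookup's ID field, so the rows line up by key rather than by position.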
What would be the best regex to match these 3 different field values: -ec-1, -ec-01, -ec01?
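One possible pattern, assuming the value is always -ec followed by an optional hyphen and one or more digits:

```
| rex field=_raw "(?<ec_field>-ec-?\d+)"
```

The -? makes the second hyphen optional and \d+ allows any number of digits, so -ec-1, -ec-01, and -ec01 all match.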
Suppose I have queries A and B, and both return successes and failures. I want the successes from A and B combined in one bar with one colour, and the failures in a separate bar with a different colour.
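A minimal sketch, assuming both result sets can be combined in one base search and carry a status field (the field name and values are placeholders to adapt):

```
(search_for_A) OR (search_for_B)
| eval outcome=if(status="success", "Success", "Failure")
| chart count by outcome
```

Rendered as a bar chart, this gives one "Success" bar and one "Failure" bar, each automatically coloured as a separate series.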
Can anyone help with a cron expression? The query should run every 15 minutes from 8:15am to 6pm, Monday to Friday.
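A single cron expression cannot express 8:15 to 18:00 exactly; the closest one-line approximation is

```
*/15 8-17 * * 1-5
```

which fires at 8:00, 8:15, ..., 17:45 Monday to Friday (it includes an extra 8:00 run and stops at 17:45). To match the stated window precisely, you would need the schedule split across several expressions:

```
15,30,45 8 * * 1-5
*/15 9-17 * * 1-5
0 18 * * 1-5
```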
Hello Splunkers, I have 2 panels and 1 text box filter in my Splunk dashboard. I want to hide and show the panels depending on the text box value. For example, if the text box is empty, then only panel A should be visible; if the text box has some value, then only panel B should be visible. The default is an empty text box, hence only panel A should be visible.
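A rough Simple XML sketch of one way this is often done, using change-handler conditions to set/unset tokens and depends on the panels (the token names show_a/show_b are my own placeholders, and the exact match expression may need adjusting):

```
<input type="text" token="txt">
  <change>
    <condition match="len('value')=0">
      <set token="show_a">true</set>
      <unset token="show_b"></unset>
    </condition>
    <condition>
      <unset token="show_a"></unset>
      <set token="show_b">true</set>
    </condition>
  </change>
</input>

<panel depends="$show_a$"> <!-- panel A content --> </panel>
<panel depends="$show_b$"> <!-- panel B content --> </panel>
```

A panel with depends is hidden whenever its token is unset, which also covers the default case if show_a is set on dashboard load.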
Hello, we are forwarding logs from a host via a universal forwarder. As the universal forwarder is not able to filter events, we went for adjusting transforms.conf and props.conf. After editing those files we indeed only ingested the expected and desired logs according to the regex in transforms. However, the indexed volume stayed the same. So I tried to send all events to the nullQueue and checked the indexed volume again. For some reason, even with zero events, the query for indexed volume is still very high. Here are the snippets from the relevant files and queries:

1. Search query for getting indexed volume:
index="_internal" source="*metrics.log" per_index_thruput series=<my index> | eval GB=kb/(1024*1024) | timechart span=2min partial=f sum(GB) by series

2. A rather boring one, the search to check the event count:
index=<my index> | stats count

3. Stanza in transforms.conf (to kill all events for testing):
[<my transformation>]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

4. Stanza in props.conf for the sourcetype:
[<my sourcetype>]
TRANSFORMS-setnull = <my transformation>

I also tried with TRANSFORMS-set; no idea what the difference between the two is, but that doesn't work either. So the nullQueue is working, as I have no events in the index, yet the query for indexing volume is off the charts. Any help would be appreciated. Thanks, Mike
Hi Team, we need to integrate Splunk with an S3 bucket in our client's AWS account. However, granting us the ListBuckets permission is a concern for the client. Is it possible for the AWS Add-on to work without the ListBuckets permission?
Hi there, I am trying to implement a use case where I have an API that keeps sending partial results (around 50-100 at a time) until all the results from the API are done. I have implemented a GeneratingCommand for it, and it returns correct results. However, I have to wait quite some time, because Splunk returns results only once all the results from the API have been collected. The use case I want: I do not wish to wait for all results; I want the partial results returned in Splunk as soon as they come back from the API, so I do not have to wait. I have tried:
1) adding limits.conf
2) using chunked=True
3) editing maxresultrows and maxresults
4) using flush() on results
5) converting to a streaming command and using the above steps
But nothing seems to work. Please help, any help would be really appreciated.
Hi everyone, I have two URLs which I want to capture in one regex group. The dest port (443) should go in a separate group. Here are two examples:
my.url.is.here:443
http://myurl.de/tasks/search/home?
When I use the regex "(?<url>[^\s:]+):?", the first example is fine, but the second only captures "http" because the match stops at the ":". Can someone help fix my regex? Thanks.
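One possible fix, sketched under the assumption that a port is always purely numeric and sits at the end of the token: only treat a colon as a port separator when digits follow it, so the ":" inside "http://" stays part of the URL group.

```
| rex field=_raw "(?<url>\S+?)(?::(?<dest_port>\d+))?(?=\s|$)"
```

For "my.url.is.here:443" this yields url=my.url.is.here and dest_port=443; for "http://myurl.de/tasks/search/home?" the whole string lands in url and dest_port stays empty.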
Hello Splunkers, how can we see Jira tickets in Splunk? For ServiceNow we have an add-on that integrates with Splunk and lets us view complete details of incidents, problems, etc. I tried working with the Jira Issue Collector Splunk application, but that does not seem to be working. The strange part is that I am unable to find any logs for it.
I have a field (version) which appears at different positions in different events of the same sourcetype, because the preceding field (description) has an irregular number of characters. Due to this I am seeing null values wherever the field (version) sits at a different position. I want to extract the field (version) wherever it is available in the events of the sourcetype. I tried the below:
| rex field=_raw "Version=(?<Version>\"\w+\s+\w+\".*?),"
| rex mode=sed field=Version "s/\\\"//g"
but it didn't work. Please suggest a way to extract this.
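Since rex scans the whole raw event regardless of where the key sits, anchoring on the literal Version= key is usually enough. A sketch, assuming the value has the form Version="some text":

```
| rex field=_raw "Version=\"(?<Version>[^\"]+)\""
```

Capturing [^\"]+ between the quotes keeps the quotes out of the captured value, so the later sed to strip them is no longer needed.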
Hi, I need to show id1, id2 on a timechart. I have a table with these columns:
index="myindex" | table duration servername id1 id2

duration    Time                   servername    id1    id2
2.643000    2021-22-11 18:30:45    Server1       111    32
2.009000    2021-22-11 18:30:45    Server2       321    72

I need to create a timechart that shows durations by servername, plus the additional column data id1, id2. Any idea? Thanks
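timechart can only split by a single field, so one common workaround is to fold the extra columns into the series label. A sketch, assuming the fields above and a span chosen to taste:

```
index="myindex"
| eval series=servername.":".id1.":".id2
| timechart span=1m max(duration) by series
```

Each line/column in the chart is then labelled like Server1:111:32, carrying id1 and id2 alongside the server name.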
After upgrading Splunk to v8.1.5 and updating the AWS app and add-on, we are running into a version conflict with the Python for Scientific Computing (PSC) app: version 3.0.0 (for MLTK) and version 1.2 (for the AWS app) should run on the same SH. MLTK 5 requires PSC 3 and the AWS app requires PSC 1.2, as documented in https://docs.splunk.com/Documentation/AWS/6.0.3/Installation/Hardwareandsoftwarerequirements

[Screenshots: MLTK, installed versions of PSC, installed AWS apps]

As described in the documentation URL above, I renamed the PSC folder and created the app.conf:

/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64_awsapp/local/app.conf
[package]
id = Splunk_SA_Scientific_Python_linux_x86_64_awsapp

Now it works for AWS, but MLTK cannot find the installed PSC 3. If I change the directory to ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp, MLTK will work, but now AWS cannot find the PSC:

/opt/splunk/etc/apps/ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp/local/app.conf
[package]
id = ZZZ_Splunk_SA_Scientific_Python_linux_x86_64_awsapp

Is there a way to get this to work, or is this an unsupported constellation?
Hi everyone, I spun up a new machine (VM) in Azure and am trying to install Phantom (SOAR) using the RPM file. I have partitioned and mounted 700 GB to /opt and 5 GB to the /tmp directory. While installing the package, I am getting the error below and have been unable to find answers for it. Can someone help with this?

Failed to run install for git
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Ra)
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Delta)
Error: Package: git-2.16.1-1.el7.x86_64 (phantom-base) Requires: perl(SVN::Core)

Thanks, Santhosh Govindhan
Can I configure BREAK_ONLY_BEFORE with this regex:

##################################################################|(pg-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(ss7-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(ss7-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(da-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(da-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-3 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(fs-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(om-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(pg-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(om-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(mms-1 \| [a-zA-Z0-9._%-]* \| rc=0 >>)|(mms-2 \| [a-zA-Z0-9._%-]* \| rc=0 >>)

and SHOULD_LINEMERGE set to true? My problem is that when I configure this, Splunk automatically applies the regex I specified in BREAK_ONLY_BEFORE as the LINE_BREAKER, so the result is not what I want. I want to keep the text matched by the regex in the event; I do not want LINE_BREAKER because it removes the matched text. Does anyone know what I should do here?
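For context: LINE_BREAKER discards whatever its first capture group matches, which is why the boundary text disappears. One commonly used workaround is to have the capture group consume only the newline and put the boundary pattern in a lookahead, so nothing from the event is thrown away. A sketch, with the alternation abbreviated purely for illustration (the real stanza would carry the full pattern above):

```
[<my sourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=#{10,}|pg-2 \| |ss7-2 \| |om-1 \| )
```

Since the lookahead is zero-width, events break before each boundary marker but the marker text itself stays in the new event.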
Hi All, hoping someone out there can help me unravel the mystery I'm currently facing. We have a KV store that we use to hold MISP values, which is checked against when running various security alerts. We have 3 searches that query the MISP data source and, based on the results, should add any new entries into the KV store. The basics of the search we run are below:

| misp command to get new records in last 24hrs
| bunch of evals to format data
| append [| inputlookup MispKVstore]
| dedup
| outputlookup append=false MispKVstore

We run this 3 times to get details for different types of values, but all are stored in the same KV store. The issue we are having is that once we reach 50 rows in the KV store, updates are not being made as expected. Each time a search runs, it adds new entries for its own category, but seems to delete/discard the values added by the other searches. All column names are consistent between the searches. I have updated max_rows_per_query as we thought we might be affected by the 50k limit, but this has not resolved the issue. Seeking any tips, tricks, or troubleshooting advice anyone is able to give to help get this sorted. Thanks in advance
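One thing worth trying, sketched with placeholder field names: outputlookup append=false rewrites the entire collection with only what the current search returns, so if a search's result set is ever truncated, rows added by the other two searches are lost. Appending instead of replacing means each search can only add rows:

```
your misp search for the last 24hrs
| (your evals to format data)
| dedup <your key field(s)>
| outputlookup append=true MispKVstore
```

With append=true an occasional duplicate may land in the collection (append does not dedup against existing rows), so a periodic cleanup search or a dedup at read time may also be needed.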
I work in a large, mostly clustered environment (Splunk Enterprise, ES; SHs and indexers clustered). Maintenance is being done and we are told that an indexer will be moved to a new host and data loss will occur. How do I move this indexer out of the cluster briefly to avoid data loss, please? Thanks very much for your help in advance.
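For reference, Splunk's documented way to take a cluster peer down gracefully is the offline command, run on the peer itself:

```
splunk offline --enforce-counts
```

With --enforce-counts, the cluster manager first rebuilds the replication and search factors across the remaining peers before the peer shuts down, which is what protects against data loss during the move. Whether this fits depends on having enough remaining peers and disk to meet the factors.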
There is maintenance being performed and we are told that an indexer (part of a cluster) is going to be moved to a new host and data loss may occur. How can we take this indexer out of the cluster so we won't face data loss, please? Thank you very much in advance.
I would like to have an alert sent when my syslog server stops sending logs to Splunk. Because I am very new to Splunk, can I get some examples please?
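A minimal sketch of one common pattern, assuming the syslog data lands under a known host value (the index and host names below are placeholders to adapt):

```
index=main host=my_syslog_server earliest=-15m
| stats count
```

Saved as a scheduled alert that runs every 15 minutes and triggers when count equals 0, this fires whenever no events from the syslog server have arrived in the last window; the window length should be tuned to how frequently the server normally logs.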