All Topics


Hi there, is there any kind of limitation in the TA-elasticsearch-data-integrator app? We currently face the problem that we ingest only a small amount of data from the Elastic cluster itself. To me it looks like some kind of limitation (snapshot attached). We have 3 heavy forwarders in place running Splunk 9.0.5. Thanks and best regards, Brenny
Hi all, I just upgraded Splunk Enterprise from 8.1.2 to 8.2.6.1, and some of my bigger searches now return the message below when I run them:

"Error in 'SearchPipeline': The pipeline size for this search exceeds a search command limit : 340"

I never saw this message on 8.1.2. Could you please guide me on which conf file stanza should be modified to increase the pipeline limit?
I've recently moved from an on-prem Splunk SOAR to the SaaS-based SOAR Cloud and am wondering if there's an equivalent to the delete_containers.py script for Cloud? I'm aware we can't run bespoke scripts on Cloud, which is OK, but I haven't found a way to manage container numbers other than manual deletion, which is time consuming. Thanks!
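One possible direction is calling the SOAR REST API from outside the platform. The sketch below is only an illustration: the container endpoint, the _filter_ query syntax, and the ph-auth-token header are assumptions that should be verified against the SOAR REST API documentation for your Cloud release, and the URL, token, and cutoff date are placeholders.

import requests

SOAR_URL = "https://example.soar.splunkcloud.com"      # placeholder stack URL
HEADERS = {"ph-auth-token": "<automation-user-token>"}  # placeholder token

# List containers older than a cutoff date (filter syntax is an assumption to verify)
resp = requests.get(
    f"{SOAR_URL}/rest/container",
    headers=HEADERS,
    params={"_filter_create_time__lt": '"2023-01-01T00:00:00Z"', "page_size": 100},
)
resp.raise_for_status()

# Delete each matching container individually via DELETE /rest/container/<id>
for container in resp.json().get("data", []):
    requests.delete(f"{SOAR_URL}/rest/container/{container['id']}", headers=HEADERS)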
I am creating a 2nd SH cluster (with its own deployer), and both SH clusters (old and new) will be accessing a single (multi-site) IDX cluster. The existing SH cluster is using site0, and the indexers are using site1 and site2. Do I need to configure the new SH cluster with a unique site id, e.g. site4, or does it not matter? There seem to be almost no docs on running multiple SH clusters against one indexer cluster. Thanks.
Hi there, need a bit of help here.

Context: Our organisation recently changed the `index`, so we need to update all queries to search against the new index after an exact date. Our current solution is to create a duplicated dashboard and use the new index in all queries. I was wondering if there is a better way to dynamically update the value of `index` based on the time span.

Task: Is there a way to dynamically pick the index value based on the time span selected? For example, use 'some_index_1' before 20th July 2023 and 'some_index_2' after 20th July 2023.

Current query template:
index=some_index_1 cf_org_name=my_org_name cf_app_name=some_appName_1 message_type=OUT | search "Submit succesfull" | stats count

Thanks in advance.
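One possible sketch (untested): search both indexes and keep events from each index only on its own side of the cutover date. The 2023-07-20 cutover is taken from the post; everything else reuses the original query.

(index=some_index_1 OR index=some_index_2) cf_org_name=my_org_name cf_app_name=some_appName_1 message_type=OUT
| where (index=="some_index_1" AND _time <  strptime("2023-07-20", "%Y-%m-%d"))
     OR (index=="some_index_2" AND _time >= strptime("2023-07-20", "%Y-%m-%d"))
| search "Submit succesfull"
| stats count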
Hello engineers, good afternoon. I have a problem I hope you can help me solve. How can I validate whether the values of two fields from an index exist in a lookup table? Do I need to create two lookup files? I was thinking of combining the two fields into a single column and looking that combined value up against the lookup table.
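A minimal sketch of one way to do this with a single lookup that contains both columns, so no concatenation is needed; the index, lookup, and field names below are hypothetical placeholders.

index=my_index
| lookup my_lookup lookup_field_1 AS event_field_1, lookup_field_2 AS event_field_2 OUTPUT lookup_field_1 AS matched
| eval in_lookup=if(isnotnull(matched), "yes", "no")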
Noob here... trying to install SOAR. After completing the installation I browse to IP:9999 and still cannot access the web GUI. What am I missing?
Hi, I am new to SIEM products. Does it make sense to send all logs to Graylog first and from there to, e.g., Splunk or OSSIM? Or is it better to forward logs from the endpoints directly to the SIEM?
Hello @splunk team, I'm not able to play any of the Splunk e-learning modules as of 29/07/2023. The video player keeps getting stuck and the modules get auto-completed even without my watching them. I'm also not able to re-launch the modules that I have already watched. Requesting @splunk to please help rectify this issue ASAP.
Hi Team, I am getting the raw log below:

2023-07-29 10:39:52.949 [INFO ] [Thread-3] AssociationProcessor - compareTransformStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=19020051, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=07/28/2023, fileName=SETTLEMENT_TRANSFORM_MERGE, totalAchCurrOutstBalAmt=0.0, totalAchBalLastStmtAmt=0.0, totalClosingBal=7.100761644428E10, sourceName=null, version=1, associationStats={}] ---- controlFileData: ControlFileData [fileName=SETTLEMENT_TRANSFORM_ASSOCIATION, busDate=07/28/2023, fileSequenceNum=0, totalBalanceLastStmt=0.0, totalCurrentOutstBal=0.0, totalRecordsWritten=19020051, totalRecords=0, totalClosingBal=7.100761644428E10]

I want to fetch the highlighted information. The query I am trying is below:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 sourcetype = "600000304_gg_abs_ipc2" " AssociationProcessor* associationStats={}] ---- controlFileData:ControlFileData " source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" |rex " AssociationProcessor* associationStats={}] ---- controlFileData:ControlFileData busDt=(?<busDt>),fileName=(?<fileName>),totalClosingBal=(?<totalClosingBal>)"|table _time  busDt fileName totalClosingBal|sort _time

But in my results I am getting the file name from the statistics block ("fileName=SETTLEMENT_TRANSFORM_MERGE"), whereas I want the one from the controlFileData section ("SETTLEMENT_TRANSFORM_ASSOCIATION"). Can someone guide me?
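A possible rex sketch (untested), assuming the controlFileData portion always carries fileName, busDate, and totalClosingBal in the order shown in the sample event; the base search reuses the post's index, sourcetype, and source.

index="600000304_d_gridgain_idx*" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "AssociationProcessor" "controlFileData"
| rex "controlFileData: ControlFileData \[fileName=(?<fileName>[^,]+), busDate=(?<busDt>[^,]+),.*?totalClosingBal=(?<totalClosingBal>[^\]]+)\]"
| table _time busDt fileName totalClosingBal
| sort _time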
Hey everyone! New to this community. I was just wondering whether we can use AppDynamics to monitor the performance of applications on our PCs, such as Microsoft Word/Excel and other accounting-related apps/software such as Climax. Thanks for your time.
Hi, I am trying to extract aggregated errors from the JSON message log coming from a Splunk event and categorise them based on status code, status title, and error description. I am unable to extract all fields in the same search because the field name for the status code and the status title is the same.

Current Query_1:
| rex field=message "errorStatus\":\{\"status\":(?<status>[0-9]+)," | stats count by status

Current Output_1:
Status  Count
404     10
422     20
500     30

Current Query_2:
| rex field=message "title\":\"(?<title>[^\"]+)" | rex field=message "status\":\"(?<status>[^\"]+)" | spath input=title | spath input=status | stats count by status, title

Current Output_2:
Status            Title        Count
Service_A_Failed  Site error   10
Service_B_Failed  User Error   20
Service_C_Failed  Infra Error  30

Expected Output (I want to merge both of the above into a single query):
Status Code  Component_Status   Title        Count
404          Service_A_Failed   Site error   10
422          Service_B_Failed   User Error   20
500          Service_C_Failed   Infra Error  30
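A possible single-query sketch (untested, assuming the message field matches the two rex patterns from the post): give the colliding extractions distinct capture-group names so they can coexist in one stats.

| rex field=message "errorStatus\":\{\"status\":(?<status_code>[0-9]+),"
| rex field=message "title\":\"(?<title>[^\"]+)"
| rex field=message "status\":\"(?<component_status>[^\"]+)"
| stats count by status_code, component_status, title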
Hello, we have a Splunk Cloud DEV environment and are trying to upload some cyber-security-related mock data to test some detection logic. The mock data available on the Splunk tutorial page (https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchTutorial/Systemrequirements#Download_the_tutorial_data_files) relates to sales and prices, which does not help. Other mock data I found on GitHub is usually intended for Splunk Enterprise or on-premises deployments, where the data is loaded by reaching into the backend, so it is not possible to upload BOTS or the other data sets I found to Splunk Cloud. I would appreciate it if anyone can help with this. Thank you.
I am retrieving operation details like operation name, total time, etc. from the JSON message log coming as part of a Splunk search event. I want to show a custom name for the operation that was extracted from the JSON data.

Current Result:
Operation             Total time
PREDICT: A1: B1: C1   100
PREDICT: A2: B2: C2   200
PREDICT: A3: B3: C3   300
PREDICT: A4: B4: C4   400

Expected Result:
Operation    Total time
Service_A1   100
Service_A2   200
Service_A3   300
Service_A4   400
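A possible sketch (untested), assuming the custom name is simply "Service_" plus the first token after PREDICT, as in the example rows:

| rex field=Operation "PREDICT:\s*(?<op_key>[^:]+):"
| eval Operation="Service_".op_key
| fields - op_key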
Hello everyone, I have a requirement to create a dashboard which has separate panels for each sale category, based on the sale_type selected. Each panel should show the timestamp and the time taken for the transaction call for that sale_category. For that I need one static dropdown which lists the sale_type values, and a second one which is populated dynamically with the categories once a sale_type is selected. Also, each category should have a separate panel which shows the time and the duration of each call made. (See the sketch after this post for the cascading dropdown idea.)

The table contains the columns (Time_Stamp, Sale_type, Sale_Category, Min_value, Max_value, Mean_Value, count_calls).

Example:
Sale_type => (Refreshments, Dinner, Lunch, Breakfast)
Sale_Category =>
Refreshments - (Juice, Milkshake, Lassi)
Dinner - (Pasta, Chicken wings)
Lunch - (Fried rice, Pizza)
Breakfast - (Toast, Donut, Bagels)

So far, I'm using this:
| dbxquery shortnames=t connection=xxx query="select sale_type,sale_category, avg(execmin) as min,avg(execmean) as mean, avg(execmax)as max from (Select sale_type,sale_category,execmin,execmax,execmean,object from XXX) group by sale_type,sale_category" | search sale_type=$saletype$

Thank you.
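A possible sketch for the populating search of the second (cascading) dropdown, reusing the dbxquery and $saletype$ token from the post; the category token name ($salecategory$) that each panel would then filter on is a hypothetical placeholder.

| dbxquery shortnames=t connection=xxx query="select sale_type,sale_category, avg(execmin) as min,avg(execmean) as mean, avg(execmax) as max from (Select sale_type,sale_category,execmin,execmax,execmean,object from XXX) group by sale_type,sale_category"
| search sale_type=$saletype$
| stats count by sale_category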
I have an on-prem deployment server and an AIX server with a UF. I have a log monitor that redirects to and overwrites an out file every minute, for example:

-rw-r--r-- 1 root system 27804 Jul 27 17:32 /usr/local/bin/reports/mysycpost_check.out
-rw-r--r-- 1 root system 27804 Jul 27 17:33 /usr/local/bin/reports/mysycpost_check.out
-rw-r--r-- 1 root system 27804 Jul 27 17:34 /usr/local/bin/reports/mysycpost_check.out

The file contains 41 lines every time it is overwritten, but with different values each time:

SYM000 19727072 23724770 0 - 0:28 SYCPOST SYC000 /SYM/SYM000
SYM000 22807268 23724770 0 - 0:17 SYCPOST SYC000 /SYM/SYM000
SYM000 23200462 23724770 0 - 0:08 SYCPOST SYC000 /SYM/SYM000
SYM000 23266014 23724770 0 - 0:14 SYCPOST SYC000 /SYM/SYM000
SYM000 23659042 23724770 0 - 0:11 SYCPOST SYC000 /SYM/SYM000
SYM000 23855850 23724770 0 - 0:35 SYCPOST SYC000 /SYM/SYM000
SYM000 24576546 23724770 0 - 0:43 SYCPOST SYC000 /SYM/SYM000
SYM000 24838656 23724770 0 - 0:06 SYCPOST SYC000 /SYM/SYM000
SYM000 24904198 23724770 0 - 0:09 SYCPOST SYC000 /SYM/SYM000
SYM000 24969758 23724770 0 - 0:22 SYCPOST SYC000 /SYM/SYM000
SYM000 25035266 23724770 0 - 0:56 SYCPOST SYC000 /SYM/SYM000
SYM000 25100802 23724770 0 - 0:06 SYCPOST SYC000 /SYM/SYM000
SYM000 25166340 23724770 0 - 0:05 SYCPOST SYC000 /SYM/SYM000
SYM000 25231878 23724770 0 - 0:04 SYCPOST SYC000 /SYM/SYM000
SYM000 25362954 23724770 0 - 0:04 SYCPOST SYC000 /SYM/SYM000
SYM000 25428492 23724770 0 - 0:03 SYCPOST SYC000 /SYM/SYM000
SYM000 25494030 23724770 0 - 0:03 SYCPOST SYC000 /SYM/SYM000
(41 lines)

Right now the timestamp is being taken from the value "0:28" at the top of the file, which Splunk interprets as 12:28am, so all my events use that value for their time. This is incorrect. I want each file to be one event and to get its timestamp from the actual time the log was written: Jul 27 17:32, Jul 27 17:33, Jul 27 17:34, and so on.

Here are the inputs.conf and props.conf, which sit side by side on my deployment server under /opt/splunk/etc/deployment-apps/cu-infrastructure-xxx/local:

[root@deployment_server local]# ll
total 12
-rw-------. 1 splunk splunk 21 Dec 30 2020 app.conf
-rw-rw-r--. 1 splunk splunk 1326 Jul 27 14:20 inputs.conf
-rw-r--r--. 1 splunk splunk 115 Jul 27 16:16 props.conf

props.conf:
[sycpost]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8

inputs.conf:
### symitar SYCPOST utilization logs
[monitor:///usr/local/bin/reports/mysycpost_check.out]
disabled = false
index = cu-infrastructure-xxx
sourcetype = sycpost

Questions: Is my props.conf in the right location (the deployment server)? Does it need to be on my Cloud indexers? Is the props.conf correct? I'm trying to have the event time reflect when the log is actually written.
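A sketch of one commonly suggested props.conf pattern for treating the whole file as a single event stamped with the current (index) time. This is an assumption to verify in your environment, and note that event-breaking and timestamp settings like these are parse-time settings, so they generally need to be on the parsing tier (the Cloud indexers or a heavy forwarder), not only in the deployment app pushed to the UF.

[sycpost]
SHOULD_LINEMERGE = true
# BREAK_ONLY_BEFORE uses a pattern that never appears in the data, so all
# lines of the file merge into one event (bounded by MAX_EVENTS lines)
BREAK_ONLY_BEFORE = ^NEVER_MATCHES_ANYTHING$
MAX_EVENTS = 1000
DATETIME_CONFIG = CURRENT
TRUNCATE = 0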
I'm trying to run a Python script as part of an Adaptive Response Action. In Splunk ES, I go to Enterprise Security > Configure > Content > Content Management > Correlation Search. Under the correlation search, I added an Adaptive Response Action and selected Run a Script (I was initially told to use Webhook; however, I wasn't able to pass arguments from code, only a parameter for a URL), and placed a copy of the Python script that contains the POST request and some exception handling in $Splunk_Home/bin/scripts. For the Trigger Condition, I selected Custom, as I wanted to launch the action on demand; however, I'm not sure what parameters to use for this. I tried to find documentation, to no avail. Could someone please advise? Thank you.
Hello Splunkers, I have created a script and placed it in

<splunk_home>/etc/apps/search/bin/seq.py

Below is the script:

import splunklib.client as client

# Splunk connection details
HOST = "localhost"
PORT = 8089
USERNAME = "admin"
PASSWORD = "changeme"

# Create a Splunk service instance
service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD
)

# List of specific saved searches to run in sequence
saved_searches_to_run = ['List of Indexes', 'List of Source Types', 'List of Sources']

# Function to run a saved search
def run_saved_search(saved_search_name):
    saved_search = service.saved_searches[saved_search_name]
    job = saved_search.dispatch()
    while not job.is_done():
        pass  # Wait for the job to complete
    # Process the search results here
    results = job.results()
    # Print the raw search results
    print(f"Search results for {saved_search_name}:")
    for result in results:
        print(result)
    print()

# Run the specific saved searches in sequence
for saved_search_name in saved_searches_to_run:
    print("Running saved search:", saved_search_name)
    run_saved_search(saved_search_name)
    print("Completed saved search:", saved_search_name)
    print()

I then placed this stanza in <splunk_home>/etc/apps/search/local/commands.conf:

[seq]
filename=seq.py

But when I run the command in Splunk as

|seq

it returns the error: External search command 'seq' returned error code 1.

This sample code works correctly with |test:

import sys
import splunk.Intersplunk

# Read parameters
name_prefix = sys.argv[1]

# Output data should be a list of dictionaries like this
data = [{'name': 'xyz', 'age': 23}, {'name': 'abc', 'age': 24}]

for record in data:
    record['name'] = name_prefix + record['name']

# Use the `outputResults` function from `splunk.Intersplunk` to send the data back to Splunk
splunk.Intersplunk.outputResults(data)

Should the Splunk Python SDK (splunklib) be installed? This is a single-instance Splunk.
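For comparison, here is a minimal sketch of seq.py written against the same splunk.Intersplunk protocol as the working |test example; the rows it returns are placeholders, and the actual saved-search dispatch logic (which would need splunklib importable by Splunk's bundled Python) is deliberately left out.

import splunk.Intersplunk

try:
    # Placeholder rows only; the real saved-search dispatch via splunklib
    # would go here once splunklib can be imported by Splunk's Python.
    saved_searches_to_run = ['List of Indexes', 'List of Source Types', 'List of Sources']
    results = [{'saved_search': name, 'status': 'pending'} for name in saved_searches_to_run]
    # Send rows back to the search pipeline instead of printing to stdout
    splunk.Intersplunk.outputResults(results)
except Exception as e:
    # Surface any failure as a search error rather than a bare exit code
    splunk.Intersplunk.generateErrorResults(str(e))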
I'm having a hard time trying to extract a string from a field in a Splunk search using regex; can someone help please? The field looks like:

client_info=xxx-yyy=aaaa-bbb-cccc::4.144.1::web-app-id::plugin-id

I just want the strings web-app-id and plugin-id extracted into separate fields named WebApp and Plugin. Appreciate any help on this, thanks in advance!
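A possible rex sketch (untested), assuming the two wanted values are always the last two ::-delimited segments of client_info:

| rex field=client_info "::(?<WebApp>[^:]+)::(?<Plugin>[^:]+)$"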
Hello, I have a PowerShell script that parses emails and pulls out specific header data that I want in Splunk. While writing the script I decided to have it output JSON, as I thought that would be a good option to feed to Splunk. I produced a sample JSON log file (one line of JSON per message I want parsed) and set up a sourcetype via the interactive Add Data wizard. I then added that sourcetype to my app's props.conf.

My issue is I cannot seem to find the right way to get Splunk to execute the PowerShell script. I've tried script:// with the .ps1, with a .path file, and recently tried powershell:// with a script parameter. Nothing seems to be working. Any guidance on how to make this work would be great. I don't want to have to resort to a scheduled task running the script and outputting to a log file that Splunk monitors, but I can do that if I need to.

Here is the inputs.conf that I tried:

[script://$SPLUNK_HOME/etc/apps/phishalert/bin/phishalert_output.ps1]
disabled = 1
interval = 300
index = email
source = phishalert
sourcetype = phishalert

[script://$SPLUNK_HOME/etc/apps/phishalert/bin/phishalert_output.path]
disabled = 1
interval = 300
index = email
source = phishalert
sourcetype = phishalert

[powershell://PhishAlertOutput]
disabled = 1
script = . "$SPLUNKHOME/etc/apps/phishalert/bin/phishalert_output.ps1"
schedule = */5 * * * *
sourcetype = phishalert

Here is the props.conf:

[phishalert]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
TIMESTAMP_FIELDS = timestamp
category = Structured
description = Phish alert json data.
disabled = false
pulldown_type = true
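A sketch of a powershell:// stanza to compare against, with two points to verify against the Windows/PowerShell input docs for your version: the $SplunkHome variable spelling and backslash path separators are assumptions, and an input only runs when disabled = 0 (all three stanzas in the post have disabled = 1, which keeps them off).

[powershell://PhishAlertOutput]
disabled = 0
# $SplunkHome spelling and backslashes are assumptions to verify against the docs
script = . "$SplunkHome\etc\apps\phishalert\bin\phishalert_output.ps1"
schedule = */5 * * * *
index = email
sourcetype = phishalert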