All Topics



Hi Folks,    Can anyone please help me with a script to change the Splunk admin password across 100 servers? It should prompt for the new password.   Thanks in advance.
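A minimal sketch of one way to do this, assuming passwordless SSH to each host, a servers.txt file with one hostname per line, and Splunk installed at /opt/splunk (all of these are assumptions -- adjust to your environment):

```
#!/bin/sh
# Prompt once for the new and current admin passwords, then push the
# change to every host in servers.txt via the splunk CLI.
printf 'New admin password: '
stty -echo; read -r NEWPASS; stty echo; echo
printf 'Current admin password: '
stty -echo; read -r CURPASS; stty echo; echo
while read -r host; do
  ssh "$host" "/opt/splunk/bin/splunk edit user admin -password '$NEWPASS' -auth 'admin:$CURPASS'"
done < servers.txt
```

If the current password differs per host, you would need a per-host credential source instead of the single prompt.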
A page about the .NET agent says: "The AppDynamics .NET Agent includes an embedded .NET Machine Agent that runs as part of the AppDynamics.Agent.Coordinator service. Among other things, the Machine Agent regularly gathers system performance data and reports it back to the Controller as metrics." To simplify our deployment I've uninstalled the standalone (java) machine agent from one of our servers in favour of the "embedded .NET Machine Agent" mentioned above, but now there's no machine level metrics coming through at all. I found the following in AgentLog.txt: 2021-08-03 14:44:00.0257 11408 AppDynamics.Coordinator 1 9 Info RegistrationChannel Auto agent registration attempted: Application Name [My application name] Component Name [Machine Agent] Node Name [My node name] 2021-08-03 14:44:00.0257 11408 AppDynamics.Coordinator 1 9 Info RegistrationChannel Auto agent registration SUCCEEDED! 2021-08-03 14:44:00.0257 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Registered machine/collector agent with machine ID [707] 2021-08-03 14:44:00.0257 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Starting Machine Agent .... 2021-08-03 14:44:00.0257 11408 AppDynamics.Coordinator 1 9 Info ControllerTimeSkewHandler Skew Handler is : [enabled]. 2021-08-03 14:44:00.0569 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Metrics Initialized with maxPublishQueueLength [5], aggregationFrequencyInMillis [60000] 2021-08-03 14:44:00.0569 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Metrics Metric Service is : [enabled]. 
2021-08-03 14:44:00.0725 11408 AppDynamics.Coordinator 1 9 Info IISMetricManager Started IIS metric collection 2021-08-03 14:44:00.0725 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Scheduling metric polling period of 60 with cache timeout of 1 2021-08-03 14:44:00.0882 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager starts EventLog listener 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added EventLog for listening: [Application] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added EventLog for listening: [System] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added a new rule for monitoring .NET crash events: [Level = Warning, Source = Application Error] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added a new rule for monitoring .NET crash events: [Level = Warning, Source = .NET Runtime] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added a new rule for monitoring .NET crash events: [Level = Information, Source = Windows Error Reporting] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener added a new rule for monitoring .NET crash events: [Level = Warning, Source = WAS] 2021-08-03 14:44:00.1038 11408 AppDynamics.Coordinator 1 9 Info EventLogListener starts listen EventLog: [Application] 2021-08-03 14:44:00.1194 11408 AppDynamics.Coordinator 1 9 Info EventLogListener starts listen EventLog: [System] 2021-08-03 14:44:00.1194 11408 AppDynamics.Coordinator 1 9 Info EventLogListener schedules to clean entries older than [60] sec every [60] sec 2021-08-03 14:44:00.1194 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Set up agent re-registration task 2021-08-03 14:44:00.1194 11408 AppDynamics.Coordinator 1 9 Info MachineAgentManager Started AppDynamics Machine Agent Successfully. 
2021-08-03 14:44:00.7288 11408 AppDynamics.Coordinator 1 11 Info ConfigurationManager Skipping update of environment variable 'InternalAppDynamicsAgent_ProfilerProcesses', as the value is unchanged 2021-08-03 14:44:01.7913 11408 AppDynamics.Coordinator 1 6 Info CoordinatorService Starting communicator... 2021-08-03 14:44:01.7913 11408 AppDynamics.Coordinator 1 9 Info CoordinatorCommunicator starting named pipe server 2021-08-03 14:44:01.8069 11408 AppDynamics.Coordinator 1 9 Info CoordinatorCommunicator named pipe = \\.\pipe\AppDynamicsAgentIPC 2021-08-03 14:44:06.0101 10932 w3wp 4 13 Warn HttpSink Error occurred while attempting to send data to [http://localhost:9090/v2/sinks/bt] ...Then a little later... 2021-08-03 14:45:00.1355 11408 AppDynamics.Coordinator 1 10 Info MachineAgentManager Metrics URL = https://redacted:1234/controller/instance/707/metricregistration 2021-08-03 14:45:00.1355 11408 AppDynamics.Coordinator 1 10 Info MachineAgentManager Metrics Payload = <request><node-id>0</node-id><agent-type>MACHINE_AGENT</agent-type><account-key>redacted</account-key><metric time-rollup-type="AVERAGE" name="Hardware Resources|Network|Outgoing KB/sec" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL" /><metric time-rollup-type="AVERAGE" name="Hardware Resources|Network|Incoming packets/sec" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL" />.... But no other instance of "MachineAgent" appears after this in AgentLog.txt, and the server/node appears dead from the perspective of the controller. The "HttpSink Error" error is a new issue as of this morning and might be unrelated. How can I configure/debug to make this work? I'm wondering if the embedded .NET machine agent works at all, since there's practically no mention of it on any other AppDynamics pages.
Splunk installation fails on one server; the logs are below. Could you please point me in the right direction -- where should I look, and why is this failing? We have already tried a clean boot/install, removed AV and all third-party security software to see whether that helps (it does not), rebooted the system multiple times with no luck, and removed encryption with no luck. We are running out of ideas, so any help would be great!     MSI (s) (4C:E4) [09:35:45:084]: Executing op: FileCopy(SourceName=ssmotatu.con|web.conf,SourceCabKey=filFFD0A48B92D564AD2586EEDC3AF570B4,DestName=web.conf,Attributes=512,FileSize=83,PerTick=65536,,VerifyMedia=1,,,,,CheckCRC=0,,,InstallMode=58982400,HashOptions=0,HashPart1=493118281,HashPart2=-1532812437,HashPart3=-875473769,HashPart4=68786463,,) MSI (s) (4C:E4) [09:35:45:085]: File: C:\Program Files\BMW_SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\web.conf; To be installed; Won't patch; No existing file MSI (s) (4C:E4) [09:35:45:085]: Source for file 'filFFD0A48B92D564AD2586EEDC3AF570B4' is compressed MSI (s) (4C:E4) [09:35:45:086]: Executing op: CacheSizeFlush(,) MSI (s) (4C:E4) [09:35:45:086]: Executing op: ActionStart(Name=RollbackRegmonDrv,,) MSI (s) (4C:E4) [09:35:45:092]: Executing op: CustomActionSchedule(Action=RollbackRegmonDrv,ActionType=3329,Source=BinaryData,Target=UninstallRegmonDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;FailCA=) MSI (s) (4C:E4) [09:35:45:097]: Executing op: ActionStart(Name=InstallRegmonDrv,,) MSI (s) (4C:E4) [09:35:45:098]: Executing op: CustomActionSchedule(Action=InstallRegmonDrv,ActionType=3073,Source=BinaryData,Target=InstallRegmonDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;LEGACYDRV=1;FailCA=) MSI (s) (4C:F4) [09:35:45:103]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIAC28.tmp, Entrypoint: InstallRegmonDrvCA MSI (s) (4C:58) [09:35:45:104]: Generating random cookie. 
MSI (s) (4C:58) [09:35:45:107]: Created Custom Action Server with PID 11196 (0x2BBC). MSI (s) (4C:08) [09:35:45:127]: Running as a service. MSI (s) (4C:08) [09:35:45:130]: Hello, I'm your 64bit Elevated Non-remapped custom action server. InstallRegmonDrv: Warning: Invalid property ignored: FailCA=. MSI (s) (4C:E4) [09:35:45:234]: Executing op: ActionStart(Name=RollbackNetmonDrv,,) InstallRegmonDrv: Info: Driver inf file: C:\Program Files\BMW_SplunkUniversalForwarder\bin\splunkdrv.inf. MSI (s) (4C:E4) [09:35:45:235]: Executing op: CustomActionSchedule(Action=RollbackNetmonDrv,ActionType=3329,Source=BinaryData,Target=UninstallNetmonDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;FailCA=) MSI (s) (4C:E4) [09:35:45:241]: Executing op: ActionStart(Name=InstallNetmonDrv,,) MSI (s) (4C:E4) [09:35:45:242]: Executing op: CustomActionSchedule(Action=InstallNetmonDrv,ActionType=3073,Source=BinaryData,Target=InstallNetmonDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;LEGACYDRV=1;FailCA=) MSI (s) (4C:30) [09:35:45:248]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIACB6.tmp, Entrypoint: InstallNetmonDrvCA InstallNetmonDrv: Warning: Invalid property ignored: FailCA=. MSI (s) (4C:E4) [09:35:45:346]: Executing op: ActionStart(Name=RollbackNohandleDrv,,) InstallNetmonDrv: Info: Driver inf file: C:\Program Files\BMW_SplunkUniversalForwarder\bin\splknetdrv.inf. 
MSI (s) (4C:E4) [09:35:45:347]: Executing op: CustomActionSchedule(Action=RollbackNohandleDrv,ActionType=3329,Source=BinaryData,Target=UninstallNohandleDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;FailCA=) MSI (s) (4C:E4) [09:35:45:352]: Executing op: ActionStart(Name=InstallNohandleDrv,,) MSI (s) (4C:E4) [09:35:45:353]: Executing op: CustomActionSchedule(Action=InstallNohandleDrv,ActionType=3073,Source=BinaryData,Target=InstallNohandleDrvCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;LEGACYDRV=1;FailCA=) MSI (s) (4C:D8) [09:35:45:359]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIAD24.tmp, Entrypoint: InstallNohandleDrvCA InstallNohandleDrv: Warning: Invalid property ignored: FailCA=. MSI (s) (4C:E4) [09:35:45:456]: Executing op: ActionStart(Name=SavePasswordRules,,) InstallNohandleDrv: Info: Driver inf file: C:\Program Files\BMW_SplunkUniversalForwarder\bin\SplunkMonitorNoHandleDrv.inf. MSI (s) (4C:E4) [09:35:45:458]: Executing op: CustomActionSchedule(Action=SavePasswordRules,ActionType=3073,Source=BinaryData,Target=SavePasswordRulesCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;MinPasswordLowercaseLen=0;MinPasswordUppercaseLen=0;MinPasswordDigitLen=0;MinPasswordSpecialCharLen=0;MinPasswordLen=8;FailCA=) MSI (s) (4C:2C) [09:35:45:463]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIAD93.tmp, Entrypoint: SavePasswordRulesCA MSI (s) (4C:E4) [09:35:45:485]: Executing op: ActionStart(Name=CreateFtr,,) SavePasswordRules: Warning: Invalid property ignored: FailCA=. MSI (s) (4C:E4) [09:35:45:486]: Executing op: CustomActionSchedule(Action=CreateFtr,ActionType=3073,Source=BinaryData,Target=CreateFtrCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;FailCA=) MSI (s) (4C:9C) [09:35:45:492]: Invoking remote custom action. 
DLL: C:\Windows\Installer\MSIADB3.tmp, Entrypoint: CreateFtrCA MSI (s) (4C:E4) [09:35:45:514]: Executing op: ActionStart(Name=FirstTimeRun,,) CreateFtr: Warning: Invalid property ignored: FailCA=. MSI (s) (4C:E4) [09:35:45:515]: Executing op: CustomActionSchedule(Action=FirstTimeRun,ActionType=3073,Source=BinaryData,Target=FirstTimeRunCA,CustomActionData=SplunkHome=C:\Program Files\BMW_SplunkUniversalForwarder\;FailCA=) MSI (s) (4C:08) [09:35:45:521]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIADD3.tmp, Entrypoint: FirstTimeRunCA FirstTimeRun: Warning: Invalid property ignored: FailCA=. FirstTimeRun: Info: Properties: splunkHome: C:\Program Files\BMW_SplunkUniversalForwarder. FirstTimeRun: Info: Execute first time run. FirstTimeRun: Info: Enter. Args: "C:\Program Files\BMW_SplunkUniversalForwarder\bin\splunk.exe", _internal first-time-run --answer-yes --no-prompt FirstTimeRun: Info: Execute string: cmd.exe /c ""C:\Program Files\BMW_SplunkUniversalForwarder\bin\splunk.exe" _internal first-time-run --answer-yes --no-prompt >> "C:\Users\axy4933\AppData\Local\Temp\splunk.log" 2>&1" FirstTimeRun: Info: WaitForSingleObject returned : 0x0 FirstTimeRun: Info: Exit code for process : 0xc0000409 FirstTimeRun: Info: Leave. FirstTimeRun: Error: ExecCmd failed: 0xc0000409. FirstTimeRun: Error 0x80004005: Cannot execute first time run. CustomAction FirstTimeRun returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) MSI (s) (4C:E4) [09:35:45:886]: Note: 1: 2265 2: 3: -2147287035 MSI (s) (4C:E4) [09:35:45:886]: User policy value 'DisableRollback' is 0 MSI (s) (4C:E4) [09:35:45:886]: Machine policy value 'DisableRollback' is 0 Action ended 09:35:45: InstallFinalize. Return value 3.
I am in a unique situation where I want to use Splunk's REST API to export data to a third-party system. Looking at the docs, it seems I am required to use curl, but unfortunately it's unavailable in our environment and cannot be used. I have been told to use wget as an alternative, but I have never used wget except for downloading Splunk files. Does anyone know if the command below can be executed using wget? curl -u admin:changeme \ -k https://localhost:8089/servicesNS/admin/search/search/jobs/1423855196.339/results/ \ --get -d output_mode=json -d count=5
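wget can make the same request; the flags below map one-to-one onto the curl options (--no-check-certificate for -k, --user/--password for -u, -O - to write the response to stdout), with the GET parameters moved into the query string. If wget does not send the credentials up front, add --auth-no-challenge.

```
wget --no-check-certificate --user=admin --password=changeme -O - \
  "https://localhost:8089/servicesNS/admin/search/search/jobs/1423855196.339/results/?output_mode=json&count=5"
```

Note the URL must be quoted so the shell does not treat & as a background operator.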
Hi All,

In Splunk, is it possible to merge two queries that each use a join?

I have queries like:

1) index=_inter sourcetype=project | dedup project server | eval Pro=project | eval source1="Y" | table source1 Pro | join Pro type=outer | [search sourcetype=SA pronames=* | dedup pronames | eval Pro=pronames ] | table Pro

which generates the output:

pro
pro1
pro2
pro3

2) A similar query, but with a different sourcetype in the join:

index=_inter sourcetype=project | dedup project server | eval Pro=project | eval source1="Y" | table source1 Pro | join Pro type=outer | [search sourcetype=SC pronames=* | dedup pronames | eval Pro=pronames ] | table Pro

pro
pro1
pro2
pro3

I'm using both to generate two separate alerts. Now I want to send only one alert by merging both queries -- is that possible, so I can send the alerts in a single mail? Like below:

pro       pros
pro1      pro1
pro2      pro2
pro3      pro3
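One way to merge them (a sketch against the two queries above, untested on your data): run the base search once and do both outer joins in it, flagging which sourcetype matched, so a single alert carries both columns.

```
index=_inter sourcetype=project
| dedup project server
| eval Pro=project
| join type=outer Pro [ search sourcetype=SA pronames=* | dedup pronames | eval Pro=pronames, in_SA="Y" ]
| join type=outer Pro [ search sourcetype=SC pronames=* | dedup pronames | eval Pro=pronames, in_SC="Y" ]
| table Pro in_SA in_SC
```

The in_SA/in_SC field names are just illustrative; any row with both flags empty appeared in neither joined sourcetype.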
I have a query in which I subtract dates from the current time. While the query works, I have noticed that a date 2 days in the past shows up as a positive number in my table. For example, I have the following records:

expiry_date   request_id
05/08/2021    1234
05/08/2021    4567
01/08/2021    8901
30/08/2021    2345

My query is:

|inputlookup mycurrentrequests.csv | eval requests_past=round(abs((now()-strptime('expiry_date', "%d/%m/%Y")))/86400,0) | where requests_past > 1 AND requests_past < 30

The search runs, but what I then see in my view is:

expiry_date   request_id   requests_past
05/08/2021    1234         2
05/08/2021    4567         2
01/08/2021    8901         2
30/08/2021    2345         27

For the expiry_date of 01/08/2021, which is in the past, "2" is technically correct, but I want it presented as "-2". I will then use this to effectively do a "where requests_past < 0" as well as a "where requests_past > 0".
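Dropping the abs() gives a signed difference directly: computing expiry minus now makes past dates negative and future dates positive. A sketch of the eval, assuming the same lookup and field names as above:

```
| inputlookup mycurrentrequests.csv
| eval requests_past = round((strptime(expiry_date, "%d/%m/%Y") - now())/86400, 0)
```

From there, "where requests_past < 0" selects expired requests and "where requests_past > 0" selects future ones.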
Hi @gcusello , I want to assign a role to a user (say dhpd), but I couldn't do so, although I'm an admin. Please help me add roles to a user. Regards, Rahul
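It's worth checking whether your admin role actually has the edit_user capability. As a sketch, the same change can also be made from the CLI on the search head (install path, role names, and password are examples; note that the -role flags replace the user's existing role list):

```
/opt/splunk/bin/splunk edit user dhpd -role user -role power -auth admin:yourpassword
```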
Hi, One of our users is getting the following error message: "Waiting for queued job to start". I increased the disk space quota, but the user still gets the same error message. Please help me with this. Regards, Rahul Gupta
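"Waiting for queued job to start" usually points to the concurrent-search quota rather than the disk quota: the user's role has hit its limit of simultaneously running searches, so new jobs queue. A sketch of the relevant authorize.conf settings for the user's role (stanza name and numbers are examples):

```
# authorize.conf on the search head
[role_power]
srchJobsQuota = 10
rtSrchJobsQuota = 10
```

Raising these increases load on the search head, so check how many searches the user (or their scheduled jobs) actually runs concurrently first.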
There is a CSV file I had added to a directory which a HF monitors. That input is set up as a batch input. Because there was an issue with how the data was getting formatted, I deleted the results from the search head using the | delete command. To re-ingest, I then followed the same procedure to add the CSV file again. After the file is added to the directory, it gets deleted due to the move-to-sinkhole policy. However, when I search for the same log, nothing shows up. Can someone please help explain why this is happening and how it can be fixed?
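One likely cause: the fishbucket still holds the file's CRC from the first ingestion, so Splunk treats the re-added file as already indexed ( | delete only masks events from search; it does not reset that tracking). The simplest workaround is to rename the file or change its first lines before re-adding it. Salting the CRC is another option, sketched below, though crcSalt = &lt;SOURCE&gt; only helps when the file's path differs from the original (path and stanza are examples):

```
# inputs.conf on the heavy forwarder
[batch:///path/to/monitored/dir]
move_policy = sinkhole
crcSalt = <SOURCE>
```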
I have the following log:

2021-08-03T14:12:40,872 th=foo cl=bla p=INFO {"tag":"bla","goo":"SPA","msg":{"dir":"in","correlation":"2035456876870723587526","pack":"ebcdic","0":"1234","3":"001234","4":"000000001234","6":"000000001234","7":"0803141240","11":"521464","41":"51400055","47":"ERT0001234000\\ARDABABDGDG\\GRE1234\\VTE01123400824\\GDE00\\SSER\\Ort612348\\Ort072\\rtI0\\","49":"124","61":"12340000004"}}

I would like to extract the two fields in RED and pink and rename the field to Co. The fields in BOLD GREEN are the key and must be present; the rest might or might not be. This is what I have so far:

index=bla | rex \"47\":\"*ARD(?<CODA>.{4})

However, this is not working and the field is not getting populated. Thank you
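Since the colour highlighting didn't survive here, this sketch assumes the target is the run of characters after ARD inside field 47 -- adjust the capture to the exact piece you need. The quotes and backslashes must be escaped in the rex string, which is probably why the original attempt returned nothing:

```
index=bla
| rex "\"47\":\"(?<f47>[^\"]+)\""
| rex field=f47 "ARD(?<Co>[^\\\\]+)"
```

The first rex pulls the whole value of key "47"; the second captures everything after ARD up to the next backslash into Co.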
When I download a dashboard as PDF or PNG, a scrollbar shows up even though it doesn't appear on the dashboard page itself, as in the picture below. (The dashboard type is Dashboard Studio.)   How can I solve this?
Hello, I have the below configuration for one index.

maxTotalDataSizeMB = 333400
maxDataSize = auto_high_volume
homePath = volume:hotwarm_cold/authentication/db
coldPath = volume:hotwarm_cold/authentication/colddb
thawedPath = /splunk/data2/authentication/thaweddb
coldToFrozenDir = /splunk/data2/authentication/frozendb
tstatsHomePath = volume:hotwarm_cold/authentication/datamodel_summary
homePath.maxDataSizeMB = 116700
coldPath.maxDataSizeMB = 216700
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 2592000
repFactor = auto

The current log volume for this index is 3 GB/day. Due to a change in requirements, the volume will increase to ~15 GB/day and the retention period will change to 60 days. Could you tell me how maxTotalDataSizeMB, homePath.maxDataSizeMB, coldPath.maxDataSizeMB and maxWarmDBCount should be calculated, and how the calculation changes with data volume and retention period?
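For a 60-day retention you would first change frozenTimePeriodInSecs to 5184000 (60 × 86400), so buckets freeze by age. The size caps then mainly need to sit comfortably above 60 days of indexed data, so that age (not size) triggers freezing. A back-of-envelope calculation, where the 50% compression ratio is an assumption (a common rule of thumb; measure your own index's disk-to-raw ratio before committing to it):

```python
# Rough sizing for ~15 GB/day raw over 60 days of retention.
daily_raw_gb = 15
retention_days = 60
compression = 0.5  # assumption: indexed size on disk ~= 50% of raw volume

needed_mb = int(daily_raw_gb * retention_days * compression * 1024)
print(needed_mb)  # prints 460800 -- baseline MB before safety headroom
```

Set maxTotalDataSizeMB above that baseline with some headroom (20-30% is a common margin), and split homePath.maxDataSizeMB / coldPath.maxDataSizeMB so they sum to roughly the total according to how much you want on hot/warm versus cold storage. maxWarmDBCount can usually stay at its default, since the size and age caps already bound the data.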
I have the following event from GCP Pub/Sub:

{
   attributes: { }
   data: {
      insertId: dbp95qcbup
      logName: organizations/xxxxxxx/logs/cloudaudit.googleapis.com%2Fdata_access
      protoPayload: { [+] }
      receiveTimestamp: 2021-08-02T05:52:58.861079027Z
      resource: { [+] }
      severity: NOTICE
      timestamp: 2021-08-02T04:01:48.076823Z
   }
   publish_time: 1627883579.307
}

Is there any way to use a forwarder to send only the contents of data{} to Splunk? I essentially want to strip off the outer parts of the JSON (attributes{}, publish_time) and have the event sent as the contents of the data{} field:

{
  "insertId": "dbp95qcbup",
  "logName": "organizations/xxxxxxx/logs/cloudaudit.googleapis.com%2Fdata_access",
  "protoPayload": {},
  "receiveTimestamp": "2021-08-02T05:52:58.861079027Z",
  "resource": {},
  "severity": "NOTICE",
  "timestamp": "2021-08-02T04:01:48.076823Z"
}
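If the envelope layout is stable (data always precedes publish_time, as in the sample), an ingest-time SEDCMD on the heavy forwarder or indexer can strip the wrapper. It is regex over raw text, so it is fragile against layout changes and worth testing carefully; the sourcetype name and regex below are assumptions:

```
# props.conf -- keep only the data {...} object from the pubsub envelope
[google:gcp:pubsub:message]
SEDCMD-keep_data = s/^\{.*?"data"\s*:\s*(\{.*\})\s*,\s*"publish_time".*$/\1/
```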
Hello, I am trying to create an alert based on logs from two different indexes. Basically, I want to alert when a zip file seen in one index does not make it to a second index. I have the following Splunk query that combines both indexes, but it's not accurate: when I run the indexes separately, the zip files in question appear in both indexes, while I expected them to appear in index 1 and not in index 2.

Query combining both indexes:

index=index_1 OR index=index_2 sourcetype="index_1_logs" OR sourcetype="index_2_logs" "ftp.com" OR "External command has been executed" "*.zip" | eval results = if(match(index_1_zipfile_field,index_2_zipfile_field), "file made it through", "file did not make it through") | table results index_1_zipfile_field index_2_zipfile_field | search index_1_zipfile_field=* | dedup index_1_zipfile_field

The results show nothing under index_2_zipfile_field, giving the illusion that the zip files never made it through to index 2:

results                        index_1_zipfile_field          index_2_zipfile_field
file did not make it through   fgfbf-fgfgfg-wewsd-dfsf.zip
file did not make it through   ghghh-rtrtr-trtrt-weqe.zip

...but when I check index 2 and look up the files from the table above, I see they did make it through, so I am unsure what I'm doing wrong:

index=index_2 sourcetype=index_2_logs "ftp.com" "*fgfbf-fgfgfg-wewsd-dfsf.zip*" | table index_2_zipfield_field | dedup index_2_zipfield_field

results:

index_2_zipfield_field
fgfbf-fgfgfg-wewsd-dfsf.zip
ghghh-rtrtr-trtrt-weqe.zip

Hopefully I made sense.
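The match() comparison only works within a single event, and each event carries only one index's field, so the second operand is always null. A stats-based sketch avoids that: group by filename across both indexes and keep only files that were never seen in index_2 (field names follow the query above and are assumptions):

```
index=index_1 OR index=index_2 ("ftp.com" OR "External command has been executed") "*.zip"
| eval zipfile=coalesce(index_1_zipfile_field, index_2_zipfile_field)
| stats values(index) AS seen_in BY zipfile
| where isnull(mvfind(seen_in, "index_2"))
```

Each surviving row is a zip file that appeared in index_1 only, which is the condition to alert on.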
I would like to display a table and have the ability to give the data a category via an input field, so each row would have its own input field for the user to enter a value. For example, the data could be something like this, where the Category column is the input field. I want to put the brand of the car in the Category field and save it to a lookup table.

Car     | Category
Prius   | Toyota
Mustang | Ford
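Native Simple XML tables don't support a per-row text input, so the usual pattern is a form with a pair of inputs (one picks the row, one takes the category) plus a saved search that writes to the lookup; the Splunk App for Lookup File Editing is an alternative for direct in-place editing. A sketch of the save search, where the token names $car$ and $category$ and the lookup filename are assumptions:

```
| makeresults | eval Car="$car$", Category="$category$" | fields Car Category
| append [| inputlookup car_categories.csv]
| dedup Car
| outputlookup car_categories.csv
```

Putting the new row first means dedup keeps it, so re-submitting a car overwrites its previous category.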
I could've sworn that a month ago these two worked fine together: https://splunkbase.splunk.com/app/3757/ https://splunkbase.splunk.com/app/4882/ For instance, the app's AAD Users section doesn't load. It's because fields like user_id and department don't exist. I've checked, and user_id doesn't exist: https://docs.microsoft.com/en-us/graph/api/resources/user?view=graph-rest-1.0#properties It also seems the input uses no query parameters, so it pulls in the default field set (which doesn't include the department field). input_module_MS_AAD_user.py, line 37, needs to be modified to add the $select parameter. The AAD Users section is just one of many that seem empty or don't work.
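I haven't verified the add-on's internals, but the change described would look roughly like this: build the Graph /users URL with an explicit $select so department (and whatever else the app's dashboards expect) comes back. The field list here is an assumption -- match it to the fields the app's searches actually reference:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

# Hypothetical field list; adjust to what the dashboards need.
SELECT_FIELDS = [
    "id", "userPrincipalName", "displayName",
    "department", "jobTitle", "mail", "accountEnabled",
]

def users_url(base=GRAPH_BASE, fields=SELECT_FIELDS):
    """Return the Graph /users URL with an explicit $select parameter."""
    return "{}/users?$select={}".format(base, ",".join(fields))

print(users_url())
```

The same URL-building change would slot in around the request setup near line 37 of input_module_MS_AAD_user.py.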
Hi Splunkers,   I am having the issue below; could you please help me solve it? Here are my events:

08-02-2021 20:46:39.852 +0000 WARN DateParserVerbose - Accepted time (Mon Aug 2 20:10:36 2021) is suspiciously far away from the previous event's time (Tue Aug 3 00:18:26 2021), but still accepted because it was extracted by the same pattern.

TIME 8/2/21 10:35:55.489 AM
EVENT 08-02-2021 10:35:55.489 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Mon Aug 2 10:35:53 2021).

Here is my props.conf:

[azure:prod]
DATETIME_CONFIG = CURRENT
TRUNCATE = 10000
MAX_TIMESTAMP_LOOKAHEAD = 128
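DATETIME_CONFIG = CURRENT tells Splunk to skip timestamp extraction entirely and stamp events with the indexing time, so if you want the event's own timestamp, point the parser at it explicitly instead. A sketch matching the sample events (the format string assumes a leading "08-02-2021 20:46:39.852 +0000"-style prefix; verify it against your raw data, and confirm this props.conf is on the first full Splunk instance that parses the data):

```
[azure:prod]
TIME_PREFIX = ^
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
```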
Hey everyone, As far as I can tell in the current version of dashboard studio there is no way to export to PNG or PDF that does not include the input menus at the top. (Time, multiselects, etc.) Is this correct, or is there something I am missing? Thanks in advance for your help.
Hi Splunkers. Could anyone give me some info on what kinds of attacks I can work on detecting based on Linux and Windows logs? I'm already working on brute-force attacks, but my team wants me to work on other possible attacks as well. Please share some knowledge. TIA.
Whenever I try to open Splunk from a masked URL (a URL that points to another URL but shows the first URL in the browser address bar), it says: "To protect your security, xxxxxxxxxxxx.com will not allow Firefox to display the page if another site has embedded it. To see this page, you need to open it in a new window." Is there any way I can allow this to happen?
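That message is Firefox honoring the X-Frame-Options header Splunk Web sends, since a masked URL works by framing the target page. Splunk has a web.conf switch for this header; turning it off allows framing but removes the clickjacking protection the header provides, so weigh that trade-off first:

```
# web.conf on the search head; restart Splunk Web after changing it
[settings]
x_frame_options_sameorigin = false
```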